00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1822 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3088 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.036 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.057 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.073 Using shallow fetch with depth 1 00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.073 > git --version # timeout=10 00:00:00.093 > git --version # 'git version 2.39.2' 00:00:00.093 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.094 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.094 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.809 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.820 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.830 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:02.830 > git config core.sparsecheckout # timeout=10 00:00:02.840 > git read-tree -mu HEAD # timeout=10 00:00:02.854 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:02.871 Commit message: "inventory/dev: add missing long names" 00:00:02.872 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:02.949 [Pipeline] Start of Pipeline 00:00:02.963 [Pipeline] library 00:00:02.964 Loading library shm_lib@master 00:00:02.964 Library shm_lib@master is cached. Copying from home. 00:00:02.978 [Pipeline] node 00:00:02.984 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.987 [Pipeline] { 00:00:02.996 [Pipeline] catchError 00:00:02.998 [Pipeline] { 00:00:03.008 [Pipeline] wrap 00:00:03.017 [Pipeline] { 00:00:03.025 [Pipeline] stage 00:00:03.026 [Pipeline] { (Prologue) 00:00:03.204 [Pipeline] sh 00:00:03.482 + logger -p user.info -t JENKINS-CI 00:00:03.500 [Pipeline] echo 00:00:03.501 Node: GP6 00:00:03.508 [Pipeline] sh 00:00:03.796 [Pipeline] setCustomBuildProperty 00:00:03.807 [Pipeline] echo 00:00:03.808 Cleanup processes 00:00:03.812 [Pipeline] sh 00:00:04.091 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.092 1526586 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.104 [Pipeline] sh 00:00:04.386 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.386 ++ grep -v 'sudo pgrep' 00:00:04.386 ++ awk '{print $1}' 00:00:04.386 + sudo kill -9 00:00:04.386 + true 00:00:04.399 [Pipeline] cleanWs 00:00:04.409 [WS-CLEANUP] Deleting project workspace... 00:00:04.409 [WS-CLEANUP] Deferred wipeout is used... 
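The "Cleanup processes" step above is a stale-process sweep: pgrep lists anything still running out of the previous build's spdk tree, the sweep filters out its own pgrep invocation, and whatever PIDs survive are killed. A minimal sketch of that idiom, assuming the same workspace path (the helper script that actually runs it is not shown in this log):

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List leftover processes from a previous run, drop the pgrep command
    # itself, and keep only the PID column.
    pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # 'kill -9' with an empty PID list exits non-zero, so '|| true' (the
    # '+ true' in the trace) keeps the step from failing when nothing matched.
    sudo kill -9 $pids || true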
00:00:04.415 [WS-CLEANUP] done 00:00:04.419 [Pipeline] setCustomBuildProperty 00:00:04.432 [Pipeline] sh 00:00:04.710 + sudo git config --global --replace-all safe.directory '*' 00:00:04.761 [Pipeline] nodesByLabel 00:00:04.762 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.770 [Pipeline] httpRequest 00:00:04.773 HttpMethod: GET 00:00:04.774 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.777 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.796 Response Code: HTTP/1.1 200 OK 00:00:04.796 Success: Status code 200 is in the accepted range: 200,404 00:00:04.796 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:10.096 [Pipeline] sh 00:00:10.376 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:10.396 [Pipeline] httpRequest 00:00:10.401 HttpMethod: GET 00:00:10.402 URL: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:00:10.402 Sending request to url: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:00:10.422 Response Code: HTTP/1.1 200 OK 00:00:10.422 Success: Status code 200 is in the accepted range: 200,404 00:00:10.423 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:00:46.604 [Pipeline] sh 00:00:46.889 + tar --no-same-owner -xf spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:00:49.428 [Pipeline] sh 00:00:49.707 + git -C spdk log --oneline -n5 00:00:49.707 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:49.707 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:00:49.707 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:00:49.707 7a8d39909 Revert "test/common: Enable inherit_errexit" 00:00:49.707 4506c0c36 test/common: Enable inherit_errexit 00:00:49.724 [Pipeline] withCredentials 00:00:49.732 > git --version # timeout=10 00:00:49.745 > git --version # 'git version 2.39.2' 00:00:49.759 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:49.761 [Pipeline] { 00:00:49.769 [Pipeline] retry 00:00:49.770 [Pipeline] { 00:00:49.785 [Pipeline] sh 00:00:50.064 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:50.335 [Pipeline] } 00:00:50.358 [Pipeline] // retry 00:00:50.361 [Pipeline] } 00:00:50.378 [Pipeline] // withCredentials 00:00:50.388 [Pipeline] httpRequest 00:00:50.391 HttpMethod: GET 00:00:50.392 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:50.392 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:50.399 Response Code: HTTP/1.1 200 OK 00:00:50.400 Success: Status code 200 is in the accepted range: 200,404 00:00:50.400 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.126 [Pipeline] sh 00:01:18.410 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:19.801 [Pipeline] sh 00:01:20.084 + git -C dpdk log --oneline -n5 00:01:20.084 caf0f5d395 version: 22.11.4 00:01:20.084 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:20.084 dc9c799c7d vhost: fix missing spinlock unlock 00:01:20.084 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:20.084 6ef77f2a5e 
net/gve: fix RX buffer size alignment 00:01:20.095 [Pipeline] } 00:01:20.113 [Pipeline] // stage 00:01:20.122 [Pipeline] stage 00:01:20.124 [Pipeline] { (Prepare) 00:01:20.149 [Pipeline] writeFile 00:01:20.168 [Pipeline] sh 00:01:20.451 + logger -p user.info -t JENKINS-CI 00:01:20.466 [Pipeline] sh 00:01:20.751 + logger -p user.info -t JENKINS-CI 00:01:20.791 [Pipeline] sh 00:01:21.079 + cat autorun-spdk.conf 00:01:21.079 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.079 SPDK_TEST_NVMF=1 00:01:21.079 SPDK_TEST_NVME_CLI=1 00:01:21.079 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.079 SPDK_TEST_NVMF_NICS=e810 00:01:21.079 SPDK_TEST_VFIOUSER=1 00:01:21.079 SPDK_RUN_UBSAN=1 00:01:21.079 NET_TYPE=phy 00:01:21.079 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:21.079 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.087 RUN_NIGHTLY=1 00:01:21.093 [Pipeline] readFile 00:01:21.123 [Pipeline] withEnv 00:01:21.125 [Pipeline] { 00:01:21.141 [Pipeline] sh 00:01:21.425 + set -ex 00:01:21.425 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:21.425 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.425 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.425 ++ SPDK_TEST_NVMF=1 00:01:21.425 ++ SPDK_TEST_NVME_CLI=1 00:01:21.425 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.425 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.425 ++ SPDK_TEST_VFIOUSER=1 00:01:21.425 ++ SPDK_RUN_UBSAN=1 00:01:21.425 ++ NET_TYPE=phy 00:01:21.425 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:21.425 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.425 ++ RUN_NIGHTLY=1 00:01:21.425 + case $SPDK_TEST_NVMF_NICS in 00:01:21.425 + DRIVERS=ice 00:01:21.425 + [[ tcp == \r\d\m\a ]] 00:01:21.425 + [[ -n ice ]] 00:01:21.425 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:21.425 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:21.425 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:21.425 rmmod: ERROR: Module irdma is not currently loaded 00:01:21.425 rmmod: ERROR: Module i40iw is not currently loaded 00:01:21.425 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:21.425 + true 00:01:21.425 + for D in $DRIVERS 00:01:21.425 + sudo modprobe ice 00:01:21.425 + exit 0 00:01:21.434 [Pipeline] } 00:01:21.456 [Pipeline] // withEnv 00:01:21.462 [Pipeline] } 00:01:21.484 [Pipeline] // stage 00:01:21.495 [Pipeline] catchError 00:01:21.497 [Pipeline] { 00:01:21.514 [Pipeline] timeout 00:01:21.515 Timeout set to expire in 40 min 00:01:21.516 [Pipeline] { 00:01:21.533 [Pipeline] stage 00:01:21.535 [Pipeline] { (Tests) 00:01:21.552 [Pipeline] sh 00:01:21.835 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.835 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.835 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.835 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:21.835 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.835 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.835 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:21.835 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.835 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.835 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.835 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.835 + source /etc/os-release 00:01:21.835 ++ NAME='Fedora Linux' 00:01:21.835 ++ VERSION='38 (Cloud Edition)' 00:01:21.835 ++ ID=fedora 00:01:21.835 ++ VERSION_ID=38 00:01:21.835 ++ VERSION_CODENAME= 00:01:21.835 ++ PLATFORM_ID=platform:f38 00:01:21.835 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.835 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.835 ++ LOGO=fedora-logo-icon 00:01:21.835 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.835 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.835 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.835 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.835 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.835 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.835 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.835 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.835 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.835 ++ SUPPORT_END=2024-05-14 00:01:21.835 ++ VARIANT='Cloud Edition' 00:01:21.835 ++ VARIANT_ID=cloud 00:01:21.835 + uname -a 00:01:21.835 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.835 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.214 Hugepages 00:01:23.214 node hugesize free / total 00:01:23.214 node0 1048576kB 0 / 0 00:01:23.214 node0 2048kB 0 / 0 00:01:23.214 node1 1048576kB 0 / 0 00:01:23.214 node1 2048kB 0 / 0 00:01:23.214 00:01:23.214 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.214 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:23.214 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:23.214 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:23.214 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:23.214 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:23.214 + rm -f /tmp/spdk-ld-path 00:01:23.214 + source autorun-spdk.conf 00:01:23.214 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.214 ++ SPDK_TEST_NVMF=1 00:01:23.215 ++ SPDK_TEST_NVME_CLI=1 00:01:23.215 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.215 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.215 ++ SPDK_TEST_VFIOUSER=1 00:01:23.215 ++ SPDK_RUN_UBSAN=1 00:01:23.215 ++ NET_TYPE=phy 00:01:23.215 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:23.215 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.215 ++ RUN_NIGHTLY=1 00:01:23.215 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.215 + [[ -n '' ]] 00:01:23.215 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
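The autorun-spdk.conf dumped above is a plain shell fragment of KEY=value assignments; the '++' xtrace lines show it being imported with 'source', after which each key is an ordinary variable that steers the run. A minimal sketch of that consumption pattern (a standalone illustration, not the actual autorun.sh):

    #!/usr/bin/env bash
    set -e
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]] && source "$conf"   # imports SPDK_TEST_NVMF=1, NET_TYPE=phy, ...
    # Later steps branch on the imported variables; e.g. the NIC setup earlier
    # in this log loads 'ice' because SPDK_TEST_NVMF_NICS=e810:
    if [[ ${SPDK_TEST_NVMF_NICS:-} == e810 ]]; then
        sudo modprobe ice
    fi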
00:01:23.215 + for M in /var/spdk/build-*-manifest.txt 00:01:23.215 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.215 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.215 + for M in /var/spdk/build-*-manifest.txt 00:01:23.215 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.215 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.215 ++ uname 00:01:23.215 + [[ Linux == \L\i\n\u\x ]] 00:01:23.215 + sudo dmesg -T 00:01:23.215 + sudo dmesg --clear 00:01:23.215 + dmesg_pid=1527381 00:01:23.215 + [[ Fedora Linux == FreeBSD ]] 00:01:23.215 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.215 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.215 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.215 + sudo dmesg -Tw 00:01:23.215 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:23.215 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:23.215 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.215 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.215 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.215 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.215 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:23.215 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.215 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.215 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.215 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.215 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.215 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.215 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.215 Test configuration: 00:01:23.215 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.215 SPDK_TEST_NVMF=1 00:01:23.215 SPDK_TEST_NVME_CLI=1 00:01:23.215 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.215 SPDK_TEST_NVMF_NICS=e810 00:01:23.215 SPDK_TEST_VFIOUSER=1 00:01:23.215 SPDK_RUN_UBSAN=1 00:01:23.215 NET_TYPE=phy 00:01:23.215 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:23.215 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.215 RUN_NIGHTLY=1 16:22:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.215 16:22:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.215 16:22:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.215 16:22:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.215 16:22:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.215 16:22:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:23.215 16:22:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.215 16:22:30 -- paths/export.sh@5 -- $ export PATH 00:01:23.215 16:22:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.215 16:22:30 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.215 16:22:30 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:23.474 16:22:30 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715782950.XXXXXX 00:01:23.474 16:22:30 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715782950.neGDaU 00:01:23.474 16:22:30 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:23.474 16:22:30 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:23.474 16:22:30 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.474 16:22:30 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:23.474 16:22:30 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.474 16:22:30 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.474 16:22:30 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:23.474 16:22:30 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:23.475 16:22:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.475 16:22:30 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:23.475 16:22:30 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:23.475 16:22:30 -- pm/common@17 -- $ local monitor 00:01:23.475 16:22:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.475 16:22:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.475 16:22:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.475 16:22:30 -- pm/common@21 -- $ date +%s 00:01:23.475 16:22:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.475 16:22:30 -- pm/common@21 -- $ date +%s 00:01:23.475 16:22:30 -- pm/common@25 -- $ sleep 1 00:01:23.475 16:22:30 -- pm/common@21 -- $ date +%s 00:01:23.475 16:22:30 -- pm/common@21 -- $ date +%s 00:01:23.475 16:22:30 -- pm/common@21 
-- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715782950 00:01:23.475 16:22:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715782950 00:01:23.475 16:22:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715782950 00:01:23.475 16:22:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715782950 00:01:23.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715782950_collect-vmstat.pm.log 00:01:23.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715782950_collect-cpu-load.pm.log 00:01:23.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715782950_collect-cpu-temp.pm.log 00:01:23.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715782950_collect-bmc-pm.bmc.pm.log 00:01:24.413 16:22:31 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:24.413 16:22:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.413 16:22:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.413 16:22:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.413 16:22:31 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.413 Wed May 15 02:22:31 PM UTC 2024 00:01:24.413 16:22:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.413 v24.05-pre-662-g253cca4fc 00:01:24.413 16:22:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.413 16:22:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.413 16:22:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.413 16:22:31 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:24.413 16:22:31 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:24.413 16:22:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.413 ************************************ 00:01:24.414 START TEST ubsan 00:01:24.414 ************************************ 00:01:24.414 16:22:31 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:24.414 using ubsan 00:01:24.414 00:01:24.414 real 0m0.000s 00:01:24.414 user 0m0.000s 00:01:24.414 sys 0m0.000s 00:01:24.414 16:22:31 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:24.414 16:22:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.414 ************************************ 00:01:24.414 END TEST ubsan 00:01:24.414 ************************************ 00:01:24.414 16:22:31 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:24.414 16:22:31 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:24.414 16:22:31 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:24.414 16:22:31 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:24.414 16:22:31 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:24.414 16:22:31 -- common/autotest_common.sh@10 -- 
$ set +x 00:01:24.414 ************************************ 00:01:24.414 START TEST build_native_dpdk 00:01:24.414 ************************************ 00:01:24.414 16:22:31 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:24.414 caf0f5d395 version: 22.11.4 00:01:24.414 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:24.414 dc9c799c7d vhost: fix missing spinlock unlock 00:01:24.414 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:24.414 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.414 
16:22:31 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.414 16:22:31 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:24.414 patching file config/rte_config.h 00:01:24.414 Hunk #1 succeeded at 60 (offset 1 line). 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:24.414 16:22:31 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:28.607 The Meson build system 00:01:28.607 Version: 1.3.1 00:01:28.607 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.607 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:28.607 Build type: native build 00:01:28.607 Program cat found: YES (/usr/bin/cat) 00:01:28.607 Project name: DPDK 00:01:28.607 Project version: 22.11.4 00:01:28.607 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.607 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:28.607 Host machine cpu family: x86_64 00:01:28.607 Host machine cpu: x86_64 00:01:28.607 Message: ## Building in Developer Mode ## 00:01:28.607 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.607 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:28.607 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.607 Program objdump found: YES (/usr/bin/objdump) 00:01:28.607 Program python3 found: YES (/usr/bin/python3) 00:01:28.607 Program cat found: YES (/usr/bin/cat) 00:01:28.608 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:28.608 Checking for size of "void *" : 8 00:01:28.608 Checking for size of "void *" : 8 (cached) 00:01:28.608 Library m found: YES 00:01:28.608 Library numa found: YES 00:01:28.608 Has header "numaif.h" : YES 00:01:28.608 Library fdt found: NO 00:01:28.608 Library execinfo found: NO 00:01:28.608 Has header "execinfo.h" : YES 00:01:28.608 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.608 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.608 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.608 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.608 Run-time dependency openssl found: YES 3.0.9 00:01:28.608 Run-time dependency libpcap found: YES 1.10.4 00:01:28.608 Has header "pcap.h" with dependency libpcap: YES 00:01:28.608 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.608 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.608 Compiler for C supports arguments -Wformat: YES 00:01:28.608 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.608 Compiler for C supports arguments -Wformat-security: NO 00:01:28.608 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.608 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.608 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.608 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.608 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.608 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.608 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.608 Compiler for C supports arguments -Wundef: YES 00:01:28.608 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.608 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.608 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.608 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.608 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.608 Compiler for C supports arguments -mavx512f: YES 00:01:28.608 Checking if "AVX512 checking" compiles: YES 00:01:28.608 Fetching value of define "__SSE4_2__" : 1 00:01:28.608 Fetching value of define "__AES__" : 1 00:01:28.608 Fetching value of define "__AVX__" : 1 00:01:28.608 Fetching value of define "__AVX2__" : (undefined) 00:01:28.608 Fetching value of define "__AVX512BW__" : (undefined) 00:01:28.608 Fetching value of define "__AVX512CD__" : (undefined) 00:01:28.608 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:28.608 Fetching value of define "__AVX512F__" : (undefined) 00:01:28.608 Fetching value of define "__AVX512VL__" : (undefined) 00:01:28.608 Fetching value of define "__PCLMUL__" : 1 00:01:28.608 Fetching value of define "__RDRND__" : 1 00:01:28.608 Fetching value of define "__RDSEED__" : (undefined) 00:01:28.608 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.608 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.608 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.608 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.608 Checking for function "getentropy" : YES 00:01:28.608 Message: lib/eal: Defining dependency "eal" 00:01:28.608 Message: lib/ring: Defining dependency "ring" 00:01:28.608 Message: lib/rcu: Defining dependency "rcu" 00:01:28.608 Message: lib/mempool: Defining dependency "mempool" 00:01:28.608 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:28.608 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.608 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.608 Compiler for C supports arguments -mpclmul: YES 00:01:28.608 Compiler for C supports arguments -maes: YES 00:01:28.608 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.608 Compiler for C supports arguments -mavx512bw: YES 00:01:28.608 Compiler for C supports arguments -mavx512dq: YES 00:01:28.608 Compiler for C supports arguments -mavx512vl: YES 00:01:28.608 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.608 Compiler for C supports arguments -mavx2: YES 00:01:28.608 Compiler for C supports arguments -mavx: YES 00:01:28.608 Message: lib/net: Defining dependency "net" 00:01:28.608 Message: lib/meter: Defining dependency "meter" 00:01:28.608 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.608 Message: lib/pci: Defining dependency "pci" 00:01:28.608 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.608 Message: lib/metrics: Defining dependency "metrics" 00:01:28.608 Message: lib/hash: Defining dependency "hash" 00:01:28.608 Message: lib/timer: Defining dependency "timer" 00:01:28.608 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:28.608 Compiler for C supports arguments -mavx2: YES (cached) 00:01:28.608 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.608 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:28.608 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:28.608 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:28.608 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:28.608 Message: lib/acl: Defining dependency "acl" 00:01:28.608 Message: lib/bbdev: Defining dependency "bbdev" 00:01:28.608 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:28.608 Run-time dependency libelf found: YES 0.190 00:01:28.608 Message: lib/bpf: Defining dependency "bpf" 00:01:28.608 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:28.608 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.608 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.608 Message: lib/distributor: Defining dependency "distributor" 00:01:28.608 Message: lib/efd: Defining dependency "efd" 00:01:28.608 Message: lib/eventdev: Defining dependency "eventdev" 00:01:28.608 Message: lib/gpudev: Defining dependency "gpudev" 00:01:28.608 Message: lib/gro: Defining dependency "gro" 00:01:28.608 Message: lib/gso: Defining dependency "gso" 00:01:28.608 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:28.608 Message: lib/jobstats: Defining dependency "jobstats" 00:01:28.608 Message: lib/latencystats: Defining dependency "latencystats" 00:01:28.608 Message: lib/lpm: Defining dependency "lpm" 00:01:28.608 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.608 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:28.608 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:28.608 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:28.608 Message: lib/member: Defining dependency "member" 00:01:28.608 Message: lib/pcapng: Defining dependency "pcapng" 00:01:28.608 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.608 Message: lib/power: Defining dependency "power" 00:01:28.608 Message: lib/rawdev: Defining dependency "rawdev" 00:01:28.608 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:28.608 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.608 Message: lib/rib: Defining dependency "rib" 00:01:28.608 Message: lib/reorder: Defining dependency "reorder" 00:01:28.608 Message: lib/sched: Defining dependency "sched" 00:01:28.608 Message: lib/security: Defining dependency "security" 00:01:28.608 Message: lib/stack: Defining dependency "stack" 00:01:28.608 Has header "linux/userfaultfd.h" : YES 00:01:28.608 Message: lib/vhost: Defining dependency "vhost" 00:01:28.608 Message: lib/ipsec: Defining dependency "ipsec" 00:01:28.608 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.608 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:28.608 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:28.608 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:28.608 Message: lib/fib: Defining dependency "fib" 00:01:28.608 Message: lib/port: Defining dependency "port" 00:01:28.608 Message: lib/pdump: Defining dependency "pdump" 00:01:28.608 Message: lib/table: Defining dependency "table" 00:01:28.608 Message: lib/pipeline: Defining dependency "pipeline" 00:01:28.608 Message: lib/graph: Defining dependency "graph" 00:01:28.608 Message: lib/node: Defining dependency "node" 00:01:28.608 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:28.608 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:28.608 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:28.608 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:28.608 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:28.608 Compiler for C supports arguments -Wno-unused-value: YES 00:01:29.999 Compiler for C supports arguments -Wno-format: YES 00:01:29.999 Compiler for C supports arguments -Wno-format-security: YES 00:01:29.999 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:29.999 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:29.999 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:29.999 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:29.999 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:29.999 Compiler for C supports arguments -mavx2: YES (cached) 00:01:29.999 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.999 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.999 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.999 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:29.999 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:29.999 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.999 Configuring doxy-api.conf using configuration 00:01:29.999 Program sphinx-build found: NO 00:01:29.999 Configuring rte_build_config.h using configuration 00:01:29.999 Message: 00:01:29.999 ================= 00:01:29.999 Applications Enabled 00:01:29.999 ================= 00:01:29.999 00:01:29.999 apps: 00:01:29.999 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:29.999 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:29.999 test-security-perf, 00:01:29.999 00:01:29.999 Message: 00:01:29.999 ================= 00:01:29.999 Libraries Enabled 00:01:29.999 ================= 00:01:29.999 00:01:29.999 libs: 00:01:29.999 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:29.999 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:29.999 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:29.999 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:29.999 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:29.999 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:29.999 table, pipeline, graph, node, 00:01:29.999 00:01:29.999 Message: 00:01:29.999 =============== 00:01:29.999 Drivers Enabled 00:01:29.999 =============== 00:01:29.999 00:01:29.999 common: 00:01:29.999 00:01:29.999 bus: 00:01:29.999 pci, vdev, 00:01:29.999 mempool: 00:01:29.999 ring, 00:01:29.999 dma: 00:01:29.999 00:01:29.999 net: 00:01:29.999 i40e, 00:01:29.999 raw: 00:01:29.999 00:01:29.999 crypto: 00:01:29.999 00:01:29.999 compress: 00:01:29.999 00:01:29.999 regex: 00:01:29.999 00:01:29.999 vdpa: 00:01:29.999 00:01:29.999 event: 00:01:29.999 00:01:29.999 baseband: 00:01:29.999 00:01:29.999 gpu: 00:01:29.999 00:01:29.999 00:01:29.999 Message: 00:01:29.999 ================= 00:01:29.999 Content Skipped 00:01:29.999 ================= 00:01:29.999 00:01:29.999 apps: 00:01:29.999 00:01:29.999 libs: 00:01:29.999 kni: explicitly disabled via build config (deprecated lib) 00:01:29.999 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:29.999 00:01:29.999 drivers: 00:01:29.999 common/cpt: not in enabled drivers build config 00:01:29.999 common/dpaax: not in enabled drivers build config 00:01:29.999 common/iavf: not in enabled drivers build config 00:01:29.999 common/idpf: not in enabled drivers build config 00:01:29.999 common/mvep: not in enabled drivers build config 00:01:29.999 common/octeontx: not in enabled drivers build config 00:01:29.999 bus/auxiliary: not in enabled drivers build config 00:01:29.999 bus/dpaa: not in enabled drivers build config 00:01:29.999 bus/fslmc: not in enabled drivers build config 00:01:29.999 bus/ifpga: not in enabled drivers build config 00:01:29.999 bus/vmbus: not in enabled drivers build config 00:01:29.999 common/cnxk: not in enabled drivers build config 00:01:29.999 common/mlx5: not in enabled drivers build config 00:01:29.999 common/qat: not in enabled drivers build config 00:01:29.999 common/sfc_efx: not in enabled drivers build config 00:01:29.999 mempool/bucket: not in enabled drivers build config 00:01:29.999 mempool/cnxk: not in enabled drivers build config 00:01:29.999 mempool/dpaa: not in enabled drivers build config 00:01:29.999 mempool/dpaa2: not in enabled drivers build config 00:01:29.999 mempool/octeontx: not in enabled drivers build config 00:01:29.999 mempool/stack: not in enabled drivers build config 00:01:29.999 dma/cnxk: not in enabled drivers build config 00:01:29.999 dma/dpaa: not in enabled drivers build config 00:01:29.999 dma/dpaa2: not in enabled drivers build config 00:01:29.999 dma/hisilicon: not in enabled drivers build config 00:01:29.999 dma/idxd: not in enabled drivers build config 00:01:29.999 dma/ioat: not in enabled drivers build config 00:01:29.999 dma/skeleton: not in enabled drivers build config 00:01:29.999 net/af_packet: not in enabled drivers build config 00:01:29.999 net/af_xdp: not in enabled drivers build config 00:01:29.999 net/ark: not in enabled drivers build config 00:01:29.999 net/atlantic: not in enabled drivers build config 00:01:29.999 net/avp: not in enabled drivers build config 00:01:29.999 net/axgbe: not in enabled drivers build config 00:01:29.999 net/bnx2x: not in enabled 
drivers build config 00:01:29.999 net/bnxt: not in enabled drivers build config 00:01:29.999 net/bonding: not in enabled drivers build config 00:01:29.999 net/cnxk: not in enabled drivers build config 00:01:29.999 net/cxgbe: not in enabled drivers build config 00:01:29.999 net/dpaa: not in enabled drivers build config 00:01:29.999 net/dpaa2: not in enabled drivers build config 00:01:30.000 net/e1000: not in enabled drivers build config 00:01:30.000 net/ena: not in enabled drivers build config 00:01:30.000 net/enetc: not in enabled drivers build config 00:01:30.000 net/enetfec: not in enabled drivers build config 00:01:30.000 net/enic: not in enabled drivers build config 00:01:30.000 net/failsafe: not in enabled drivers build config 00:01:30.000 net/fm10k: not in enabled drivers build config 00:01:30.000 net/gve: not in enabled drivers build config 00:01:30.000 net/hinic: not in enabled drivers build config 00:01:30.000 net/hns3: not in enabled drivers build config 00:01:30.000 net/iavf: not in enabled drivers build config 00:01:30.000 net/ice: not in enabled drivers build config 00:01:30.000 net/idpf: not in enabled drivers build config 00:01:30.000 net/igc: not in enabled drivers build config 00:01:30.000 net/ionic: not in enabled drivers build config 00:01:30.000 net/ipn3ke: not in enabled drivers build config 00:01:30.000 net/ixgbe: not in enabled drivers build config 00:01:30.000 net/kni: not in enabled drivers build config 00:01:30.000 net/liquidio: not in enabled drivers build config 00:01:30.000 net/mana: not in enabled drivers build config 00:01:30.000 net/memif: not in enabled drivers build config 00:01:30.000 net/mlx4: not in enabled drivers build config 00:01:30.000 net/mlx5: not in enabled drivers build config 00:01:30.000 net/mvneta: not in enabled drivers build config 00:01:30.000 net/mvpp2: not in enabled drivers build config 00:01:30.000 net/netvsc: not in enabled drivers build config 00:01:30.000 net/nfb: not in enabled drivers build config 00:01:30.000 net/nfp: not in enabled drivers build config 00:01:30.000 net/ngbe: not in enabled drivers build config 00:01:30.000 net/null: not in enabled drivers build config 00:01:30.000 net/octeontx: not in enabled drivers build config 00:01:30.000 net/octeon_ep: not in enabled drivers build config 00:01:30.000 net/pcap: not in enabled drivers build config 00:01:30.000 net/pfe: not in enabled drivers build config 00:01:30.000 net/qede: not in enabled drivers build config 00:01:30.000 net/ring: not in enabled drivers build config 00:01:30.000 net/sfc: not in enabled drivers build config 00:01:30.000 net/softnic: not in enabled drivers build config 00:01:30.000 net/tap: not in enabled drivers build config 00:01:30.000 net/thunderx: not in enabled drivers build config 00:01:30.000 net/txgbe: not in enabled drivers build config 00:01:30.000 net/vdev_netvsc: not in enabled drivers build config 00:01:30.000 net/vhost: not in enabled drivers build config 00:01:30.000 net/virtio: not in enabled drivers build config 00:01:30.000 net/vmxnet3: not in enabled drivers build config 00:01:30.000 raw/cnxk_bphy: not in enabled drivers build config 00:01:30.000 raw/cnxk_gpio: not in enabled drivers build config 00:01:30.000 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:30.000 raw/ifpga: not in enabled drivers build config 00:01:30.000 raw/ntb: not in enabled drivers build config 00:01:30.000 raw/skeleton: not in enabled drivers build config 00:01:30.000 crypto/armv8: not in enabled drivers build config 00:01:30.000 crypto/bcmfs: not in 
enabled drivers build config 00:01:30.000 crypto/caam_jr: not in enabled drivers build config 00:01:30.000 crypto/ccp: not in enabled drivers build config 00:01:30.000 crypto/cnxk: not in enabled drivers build config 00:01:30.000 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.000 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.000 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.000 crypto/mlx5: not in enabled drivers build config 00:01:30.000 crypto/mvsam: not in enabled drivers build config 00:01:30.000 crypto/nitrox: not in enabled drivers build config 00:01:30.000 crypto/null: not in enabled drivers build config 00:01:30.000 crypto/octeontx: not in enabled drivers build config 00:01:30.000 crypto/openssl: not in enabled drivers build config 00:01:30.000 crypto/scheduler: not in enabled drivers build config 00:01:30.000 crypto/uadk: not in enabled drivers build config 00:01:30.000 crypto/virtio: not in enabled drivers build config 00:01:30.000 compress/isal: not in enabled drivers build config 00:01:30.000 compress/mlx5: not in enabled drivers build config 00:01:30.000 compress/octeontx: not in enabled drivers build config 00:01:30.000 compress/zlib: not in enabled drivers build config 00:01:30.000 regex/mlx5: not in enabled drivers build config 00:01:30.000 regex/cn9k: not in enabled drivers build config 00:01:30.000 vdpa/ifc: not in enabled drivers build config 00:01:30.000 vdpa/mlx5: not in enabled drivers build config 00:01:30.000 vdpa/sfc: not in enabled drivers build config 00:01:30.000 event/cnxk: not in enabled drivers build config 00:01:30.000 event/dlb2: not in enabled drivers build config 00:01:30.000 event/dpaa: not in enabled drivers build config 00:01:30.000 event/dpaa2: not in enabled drivers build config 00:01:30.000 event/dsw: not in enabled drivers build config 00:01:30.000 event/opdl: not in enabled drivers build config 00:01:30.000 event/skeleton: not in enabled drivers build config 00:01:30.000 event/sw: not in enabled drivers build config 00:01:30.000 event/octeontx: not in enabled drivers build config 00:01:30.000 baseband/acc: not in enabled drivers build config 00:01:30.000 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:30.000 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:30.000 baseband/la12xx: not in enabled drivers build config 00:01:30.000 baseband/null: not in enabled drivers build config 00:01:30.000 baseband/turbo_sw: not in enabled drivers build config 00:01:30.000 gpu/cuda: not in enabled drivers build config 00:01:30.000 00:01:30.000 00:01:30.000 Build targets in project: 316 00:01:30.000 00:01:30.000 DPDK 22.11.4 00:01:30.000 00:01:30.000 User defined options 00:01:30.000 libdir : lib 00:01:30.000 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.000 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:30.000 c_link_args : 00:01:30.000 enable_docs : false 00:01:30.000 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:30.000 enable_kmods : false 00:01:30.000 machine : native 00:01:30.000 tests : false 00:01:30.000 00:01:30.000 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.000 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
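The DPDK build that follows is the standard two-step meson configure plus ninja compile, condensed here from the exact commands visible in this log (the trailing install into build/, implied by the --prefix, is not shown in this excerpt):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson build-tmp --prefix="$PWD/build" --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j48   # 745 build edges for this driver subset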
00:01:30.000 16:22:36 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:30.000 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:30.000 [1/745] Generating lib/rte_telemetry_def with a custom command 00:01:30.000 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:30.000 [3/745] Generating lib/rte_kvargs_def with a custom command 00:01:30.000 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:30.000 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:30.000 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:30.000 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:30.000 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:30.000 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.000 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.000 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:30.000 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:30.259 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:30.259 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.259 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.259 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.259 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.259 [18/745] Linking static target lib/librte_kvargs.a 00:01:30.259 [19/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:30.260 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.260 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.260 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.260 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.260 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.260 [25/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.260 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.260 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.260 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.260 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.260 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.260 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.260 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:30.260 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.260 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.260 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.260 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:30.260 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.260 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.260 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.260 [40/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:30.260 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.260 [42/745] Generating lib/rte_eal_def with a custom command 00:01:30.260 [43/745] Generating lib/rte_eal_mingw with a custom command 00:01:30.260 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:30.260 [45/745] Generating lib/rte_ring_def with a custom command 00:01:30.260 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.260 [47/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:30.260 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.260 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:30.260 [50/745] Generating lib/rte_ring_mingw with a custom command 00:01:30.260 [51/745] Generating lib/rte_rcu_mingw with a custom command 00:01:30.260 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.260 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.260 [54/745] Generating lib/rte_rcu_def with a custom command 00:01:30.520 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.520 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.520 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:30.520 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:01:30.520 [59/745] Generating lib/rte_mempool_def with a custom command 00:01:30.520 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.520 [61/745] Generating lib/rte_mbuf_def with a custom command 00:01:30.520 [62/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:30.520 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.520 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:30.520 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:30.520 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:30.520 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.520 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.520 [69/745] Generating lib/rte_meter_def with a custom command 00:01:30.520 [70/745] Generating lib/rte_net_def with a custom command 00:01:30.520 [71/745] Generating lib/rte_net_mingw with a custom command 00:01:30.520 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.520 [73/745] Generating lib/rte_meter_mingw with a custom command 00:01:30.520 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:30.520 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.520 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.520 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:30.520 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:30.520 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.520 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:30.520 [81/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:30.520 [82/745] Linking static target lib/librte_ring.a 00:01:30.520 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:30.784 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:30.784 [85/745] Generating lib/rte_pci_def with a custom command 00:01:30.784 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:30.784 [87/745] Linking static target lib/librte_meter.a 00:01:30.784 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:30.784 [89/745] Generating lib/rte_pci_mingw with a custom command 00:01:30.784 [90/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:30.784 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.784 [92/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:30.784 [93/745] Linking static target lib/librte_pci.a 00:01:30.784 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:30.784 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.047 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.047 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.047 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.047 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.047 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.047 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.047 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.047 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.047 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.047 [105/745] Generating lib/rte_cmdline_def with a custom command 00:01:31.047 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.047 [107/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:31.047 [108/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.047 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.047 [110/745] Linking static target lib/librte_telemetry.a 00:01:31.047 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.047 [112/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.047 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.047 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:31.307 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:31.307 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:01:31.307 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.307 [118/745] Generating lib/rte_hash_mingw with a custom command 00:01:31.307 [119/745] Generating lib/rte_hash_def with a custom command 00:01:31.307 [120/745] Generating lib/rte_timer_def with a custom command 00:01:31.307 [121/745] Generating lib/rte_timer_mingw with a custom command 00:01:31.307 [122/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.307 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:31.307 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:31.565 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.565 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.565 [127/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.566 [128/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.566 [129/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:31.566 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.566 [131/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:31.566 [132/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.566 [133/745] Generating lib/rte_acl_def with a custom command 00:01:31.566 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:31.566 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.566 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:31.566 [137/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.566 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:31.566 [139/745] Generating lib/rte_bbdev_def with a custom command 00:01:31.566 [140/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.566 [141/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:31.566 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.826 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.826 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:31.826 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.826 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:31.826 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.826 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.826 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.826 [150/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.826 [151/745] Generating lib/rte_bpf_def with a custom command 00:01:31.826 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:01:31.826 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.826 [154/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.826 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.090 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:32.090 [157/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:32.090 [158/745] Generating lib/rte_cfgfile_def with a custom command 00:01:32.090 [159/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.090 [160/745] Generating lib/rte_compressdev_def with a custom command 00:01:32.090 [161/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:32.090 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.090 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:01:32.090 
[164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:32.090 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:32.090 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.090 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.090 [168/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.090 [169/745] Linking static target lib/librte_rcu.a 00:01:32.090 [170/745] Linking static target lib/librte_timer.a 00:01:32.090 [171/745] Generating lib/rte_distributor_def with a custom command 00:01:32.090 [172/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.090 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.090 [174/745] Linking static target lib/librte_cmdline.a 00:01:32.090 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:32.090 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.090 [177/745] Generating lib/rte_efd_def with a custom command 00:01:32.090 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:32.349 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.349 [180/745] Linking static target lib/librte_net.a 00:01:32.349 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.349 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:32.349 [183/745] Linking static target lib/librte_metrics.a 00:01:32.349 [184/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.349 [185/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:32.349 [186/745] Linking static target lib/librte_mempool.a 00:01:32.349 [187/745] Linking static target lib/librte_cfgfile.a 00:01:32.612 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:32.612 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.612 [190/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.612 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.612 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:32.612 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:32.612 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.884 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:32.884 [196/745] Linking static target lib/librte_eal.a 00:01:32.884 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:32.884 [198/745] Generating lib/rte_gpudev_def with a custom command 00:01:32.884 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:32.884 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:32.884 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:32.884 [202/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:32.884 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:32.884 [204/745] Linking static target lib/librte_bitratestats.a 00:01:32.884 [205/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:32.884 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:32.884 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:32.884 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.152 [209/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:33.152 [210/745] Generating lib/rte_gro_def with a custom command 00:01:33.152 [211/745] Generating lib/rte_gro_mingw with a custom command 00:01:33.152 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:33.152 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:33.152 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:33.416 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:33.416 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.416 [217/745] Generating lib/rte_gso_def with a custom command 00:01:33.416 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:33.416 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:33.416 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:33.416 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:33.416 [222/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:33.416 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:33.416 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:33.416 [225/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.416 [226/745] Linking static target lib/librte_bbdev.a 00:01:33.676 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:33.676 [228/745] Generating lib/rte_ip_frag_def with a custom command 00:01:33.676 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:33.676 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:33.676 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:33.676 [232/745] Generating lib/rte_latencystats_def with a custom command 00:01:33.676 [233/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:33.676 [234/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.676 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:33.676 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:33.676 [237/745] Linking static target lib/librte_compressdev.a 00:01:33.676 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:01:33.676 [239/745] Generating lib/rte_lpm_def with a custom command 00:01:33.939 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:33.939 [241/745] Linking static target lib/librte_jobstats.a 00:01:33.939 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:33.939 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:33.939 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:34.200 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:34.200 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:34.200 [247/745] Linking static target lib/librte_distributor.a 00:01:34.200 [248/745] Generating 
lib/rte_member_def with a custom command 00:01:34.200 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:34.200 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:34.200 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:34.200 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.460 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:34.460 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:34.460 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:34.460 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:34.460 [257/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:34.460 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:34.460 [259/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:34.460 [260/745] Linking static target lib/librte_bpf.a 00:01:34.460 [261/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.460 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:34.460 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:34.460 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:34.460 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:34.460 [266/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.723 [267/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:34.723 [268/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:34.723 [269/745] Generating lib/rte_power_def with a custom command 00:01:34.723 [270/745] Linking static target lib/librte_gpudev.a 00:01:34.723 [271/745] Generating lib/rte_power_mingw with a custom command 00:01:34.723 [272/745] Generating lib/rte_rawdev_def with a custom command 00:01:34.723 [273/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:34.723 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:34.723 [275/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:34.723 [276/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:34.723 [277/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:34.723 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:34.723 [279/745] Linking static target lib/librte_gro.a 00:01:34.723 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:34.724 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:34.724 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:34.724 [283/745] Generating lib/rte_rib_def with a custom command 00:01:34.724 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:34.986 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:34.986 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:34.986 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:34.986 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:34.986 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.986 [290/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:34.986 [291/745] Generating lib/gro.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:34.986 [292/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:35.254 [293/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:35.254 [294/745] Generating lib/rte_sched_def with a custom command 00:01:35.254 [295/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.254 [296/745] Generating lib/rte_sched_mingw with a custom command 00:01:35.254 [297/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:35.254 [298/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:35.254 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:35.254 [300/745] Linking static target lib/librte_latencystats.a 00:01:35.254 [301/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:35.254 [302/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:35.254 [303/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:35.254 [304/745] Generating lib/rte_security_mingw with a custom command 00:01:35.254 [305/745] Generating lib/rte_security_def with a custom command 00:01:35.254 [306/745] Generating lib/rte_stack_def with a custom command 00:01:35.254 [307/745] Generating lib/rte_stack_mingw with a custom command 00:01:35.254 [308/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:35.254 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:35.254 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:35.254 [311/745] Linking static target lib/librte_rawdev.a 00:01:35.254 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:35.254 [313/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:35.254 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:35.254 [315/745] Linking static target lib/librte_stack.a 00:01:35.254 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:35.534 [317/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:35.534 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:35.534 [319/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:35.534 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:35.534 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.534 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.534 [323/745] Linking static target lib/librte_dmadev.a 00:01:35.534 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:35.535 [325/745] Linking static target lib/librte_ip_frag.a 00:01:35.535 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:35.535 [327/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.535 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:35.819 [329/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:35.820 [330/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:35.820 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:35.820 [332/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:35.820 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:35.820 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.100 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.100 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:36.100 [337/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.100 [338/745] Generating lib/rte_fib_def with a custom command 00:01:36.100 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:36.100 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.100 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:36.100 [342/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.100 [343/745] Linking static target lib/librte_gso.a 00:01:36.100 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:36.100 [345/745] Linking static target lib/librte_regexdev.a 00:01:36.360 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.360 [347/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:36.360 [348/745] Linking static target lib/librte_efd.a 00:01:36.360 [349/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:36.360 [350/745] Linking static target lib/librte_pcapng.a 00:01:36.360 [351/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.360 [352/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:36.626 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:36.626 [354/745] Linking static target lib/librte_lpm.a 00:01:36.626 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:36.626 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:36.626 [357/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:36.626 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.893 [359/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.893 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:36.893 [361/745] Linking static target lib/librte_reorder.a 00:01:36.893 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.893 [363/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:36.893 [364/745] Generating lib/rte_port_def with a custom command 00:01:36.893 [365/745] Generating lib/rte_port_mingw with a custom command 00:01:36.893 [366/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.893 [367/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.893 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:36.893 [369/745] Generating lib/rte_pdump_mingw with a custom command 00:01:37.159 [370/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:37.159 [371/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:37.159 [372/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:37.159 [373/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:37.159 [374/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 
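Steps such as [129]-[131], [302]-[303], and [371]-[374] above compile AVX-512 (and, just below, AVX2) variants of hot paths into separate temporary static archives (lib*_tmp.a), so a single binary can ship several ISA-specific implementations and choose among them at run time. A minimal sketch of that dispatch pattern, assuming DPDK's real rte_cpu_get_flag_enabled() from <rte_cpuflags.h>; the classify_* functions are hypothetical stand-ins for the per-ISA objects:

  #include <stdint.h>
  #include <rte_cpuflags.h>

  /* Hypothetical per-ISA implementations, standing in for the code that
   * the lib*_avx2_tmp.a / lib*_avx512_tmp.a archives above contribute. */
  static int classify_scalar(const void *keys, uint32_t n) { (void)keys; return (int)n; }
  static int classify_avx2(const void *keys, uint32_t n)   { (void)keys; return (int)n; }
  static int classify_avx512(const void *keys, uint32_t n) { (void)keys; return (int)n; }

  typedef int (*classify_fn)(const void *, uint32_t);

  /* Resolve once, based on what the running CPU actually supports. */
  static classify_fn select_classify(void)
  {
          if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F))
                  return classify_avx512;
          if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
                  return classify_avx2;
          return classify_scalar;
  }

This mirrors why the build keeps, e.g., lib/member/libsketch_avx512_tmp.a separate from the rest of librte_member.a: the AVX-512 objects are built with extra instruction-set compiler flags and must only be reached on CPUs that report those features.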
00:01:37.159 [375/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:37.159 [376/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:37.159 [377/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:37.159 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.159 [379/745] Linking static target lib/acl/libavx2_tmp.a 00:01:37.159 [380/745] Linking static target lib/librte_security.a 00:01:37.159 [381/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:37.159 [382/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.159 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.159 [384/745] Linking static target lib/librte_power.a 00:01:37.159 [385/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.418 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:37.418 [387/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:37.418 [388/745] Linking static target lib/librte_hash.a 00:01:37.418 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.418 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:37.418 [391/745] Linking static target lib/librte_rib.a 00:01:37.681 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:37.681 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:37.681 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:01:37.681 [395/745] Linking static target lib/librte_acl.a 00:01:37.681 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:37.681 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:37.681 [398/745] Generating lib/rte_table_def with a custom command 00:01:37.941 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:37.941 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.941 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:37.941 [402/745] Linking static target lib/librte_ethdev.a 00:01:38.213 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.213 [404/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:38.213 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.213 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:38.213 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:38.213 [408/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.213 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.478 [410/745] Linking static target lib/librte_mbuf.a 00:01:38.478 [411/745] Generating lib/rte_pipeline_def with a custom command 00:01:38.478 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:38.478 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:38.478 [414/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:38.478 [415/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:38.478 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:38.478 
[417/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:38.478 [418/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:38.478 [419/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.478 [420/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:38.478 [421/745] Generating lib/rte_graph_def with a custom command 00:01:38.478 [422/745] Linking static target lib/librte_fib.a 00:01:38.478 [423/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:38.478 [424/745] Generating lib/rte_graph_mingw with a custom command 00:01:38.743 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:38.743 [426/745] Linking static target lib/librte_eventdev.a 00:01:38.743 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:38.743 [428/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:38.743 [429/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.743 [430/745] Linking static target lib/librte_member.a 00:01:38.743 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:38.743 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:38.743 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:38.743 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:38.743 [435/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:39.002 [436/745] Generating lib/rte_node_def with a custom command 00:01:39.002 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:39.002 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:39.002 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:39.002 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.002 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.002 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:39.002 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:39.002 [444/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.002 [445/745] Linking static target lib/librte_sched.a 00:01:39.266 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:39.266 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:39.266 [448/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.266 [449/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.266 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.266 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:39.266 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.266 [453/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:39.266 [454/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:39.266 [455/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:39.266 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:39.266 [457/745] Generating drivers/rte_mempool_ring_def with a 
custom command 00:01:39.266 [458/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:39.525 [459/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.525 [460/745] Linking static target lib/librte_cryptodev.a 00:01:39.525 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:39.525 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.525 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:39.525 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.525 [465/745] Linking static target lib/librte_pdump.a 00:01:39.525 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:39.525 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:39.787 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:39.787 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.787 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:39.787 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.787 [472/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:39.787 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:39.787 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:39.787 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.787 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.787 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:40.047 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:40.047 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:40.047 [480/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:40.047 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:40.047 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:40.047 [483/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.047 [484/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.047 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.047 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:40.047 [487/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.047 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:40.311 [489/745] Linking static target lib/librte_table.a 00:01:40.311 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:40.311 [491/745] Linking static target lib/librte_ipsec.a 00:01:40.311 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:40.311 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.311 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.570 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.570 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:40.570 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:40.570 [498/745] 
Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:40.835 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:40.835 [500/745] Linking static target lib/librte_graph.a 00:01:40.835 [501/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:40.835 [502/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.835 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:40.835 [504/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:40.835 [505/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.835 [506/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.835 [507/745] Linking static target drivers/librte_bus_pci.a 00:01:40.835 [508/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:40.835 [509/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:40.835 [510/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.835 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:40.835 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:41.096 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:41.357 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.357 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:41.618 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.618 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:41.618 [518/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.883 [519/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:41.883 [520/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:41.883 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:41.883 [522/745] Linking static target lib/librte_port.a 00:01:41.883 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:41.883 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.883 [525/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:41.883 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:42.150 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.413 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:42.413 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:42.413 [530/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:42.413 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.413 [532/745] Linking static target drivers/librte_mempool_ring.a 00:01:42.413 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.413 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:42.413 [535/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:42.413 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:42.413 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:42.678 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:42.678 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:42.678 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.940 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.940 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:42.940 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:43.204 [544/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:43.204 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:43.464 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:43.464 [547/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:43.464 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:43.464 [549/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:43.725 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:43.725 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:43.725 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:43.725 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:43.986 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:43.986 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:43.986 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:44.253 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:44.253 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:44.253 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:44.517 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:44.517 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:44.517 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:44.781 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:44.781 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:44.781 [565/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:44.781 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:44.781 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:44.781 [568/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:44.781 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:44.781 [570/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:45.045 [571/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:45.045 
[572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:45.305 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:45.305 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:45.305 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:45.306 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:45.571 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:45.571 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:45.571 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:45.571 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:45.571 [581/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:45.572 [582/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.572 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:45.830 [584/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:45.830 [585/745] Linking target lib/librte_eal.so.23.0 00:01:45.830 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:45.830 [587/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.090 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:46.090 [589/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:46.090 [590/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:46.090 [591/745] Linking target lib/librte_meter.so.23.0 00:01:46.090 [592/745] Linking target lib/librte_ring.so.23.0 00:01:46.090 [593/745] Linking target lib/librte_pci.so.23.0 00:01:46.356 [594/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:46.356 [595/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:46.356 [596/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:46.356 [597/745] Linking target lib/librte_timer.so.23.0 00:01:46.356 [598/745] Linking target lib/librte_rcu.so.23.0 00:01:46.356 [599/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:46.356 [600/745] Linking target lib/librte_mempool.so.23.0 00:01:46.620 [601/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:46.620 [602/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:46.620 [603/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:46.620 [604/745] Linking target lib/librte_acl.so.23.0 00:01:46.620 [605/745] Linking target lib/librte_cfgfile.so.23.0 00:01:46.620 [606/745] Linking target lib/librte_jobstats.so.23.0 00:01:46.620 [607/745] Linking target lib/librte_rawdev.so.23.0 00:01:46.620 [608/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:46.620 [609/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:46.620 [610/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:46.620 [611/745] Generating symbol file 
lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:46.620 [612/745] Linking target lib/librte_stack.so.23.0 00:01:46.620 [613/745] Linking target lib/librte_graph.so.23.0 00:01:46.620 [614/745] Linking target lib/librte_dmadev.so.23.0 00:01:46.620 [615/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:46.620 [616/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:46.620 [617/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:46.620 [618/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:46.879 [619/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:46.879 [620/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:46.879 [621/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:46.879 [622/745] Linking target lib/librte_rib.so.23.0 00:01:46.879 [623/745] Linking target lib/librte_mbuf.so.23.0 00:01:46.879 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:46.879 [625/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:46.879 [626/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:46.879 [627/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:46.879 [628/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:46.879 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:46.879 [630/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:46.879 [631/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:47.138 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:47.138 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:47.138 [634/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:47.138 [635/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:47.138 [636/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:47.138 [637/745] Linking target lib/librte_gpudev.so.23.0 00:01:47.138 [638/745] Linking target lib/librte_compressdev.so.23.0 00:01:47.138 [639/745] Linking target lib/librte_bbdev.so.23.0 00:01:47.138 [640/745] Linking target lib/librte_net.so.23.0 00:01:47.138 [641/745] Linking target lib/librte_distributor.so.23.0 00:01:47.138 [642/745] Linking target lib/librte_regexdev.so.23.0 00:01:47.138 [643/745] Linking target lib/librte_reorder.so.23.0 00:01:47.138 [644/745] Linking target lib/librte_sched.so.23.0 00:01:47.138 [645/745] Linking target lib/librte_fib.so.23.0 00:01:47.138 [646/745] Linking target lib/librte_cryptodev.so.23.0 00:01:47.138 [647/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:47.138 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:47.138 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:47.138 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:47.398 [651/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:47.398 [652/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:47.398 [653/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:47.398 [654/745] Compiling 
C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:47.398 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:47.398 [656/745] Linking target lib/librte_security.so.23.0
00:01:47.398 [657/745] Linking target lib/librte_cmdline.so.23.0
00:01:47.398 [658/745] Linking target lib/librte_hash.so.23.0
00:01:47.398 [659/745] Linking target lib/librte_ethdev.so.23.0
00:01:47.398 [660/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:47.398 [661/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:47.398 [662/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:47.657 [663/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:01:47.657 [664/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:01:47.657 [665/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:01:47.657 [666/745] Linking target lib/librte_efd.so.23.0
00:01:47.657 [667/745] Linking target lib/librte_ipsec.so.23.0
00:01:47.657 [668/745] Linking target lib/librte_member.so.23.0
00:01:47.657 [669/745] Linking target lib/librte_lpm.so.23.0
00:01:47.657 [670/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:47.657 [671/745] Linking target lib/librte_gso.so.23.0
00:01:47.657 [672/745] Linking target lib/librte_pcapng.so.23.0
00:01:47.657 [673/745] Linking target lib/librte_gro.so.23.0
00:01:47.657 [674/745] Linking target lib/librte_bpf.so.23.0
00:01:47.657 [675/745] Linking target lib/librte_metrics.so.23.0
00:01:47.657 [676/745] Linking target lib/librte_ip_frag.so.23.0
00:01:47.657 [677/745] Linking target lib/librte_power.so.23.0
00:01:47.657 [678/745] Linking target lib/librte_eventdev.so.23.0
00:01:47.657 [679/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:01:47.915 [680/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:01:47.915 [681/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:47.915 [682/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:01:47.915 [683/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:01:47.915 [684/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:01:47.915 [685/745] Linking target lib/librte_bitratestats.so.23.0
00:01:47.915 [686/745] Linking target lib/librte_latencystats.so.23.0
00:01:47.915 [687/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:01:47.915 [688/745] Linking target lib/librte_pdump.so.23.0
00:01:47.915 [689/745] Linking target lib/librte_port.so.23.0
00:01:47.915 [690/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:47.915 [691/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:47.915 [692/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:48.173 [693/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:01:48.173 [694/745] Linking target lib/librte_table.so.23.0
00:01:48.173 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:01:48.431 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:48.431 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:48.431 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:48.689 [699/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:48.689 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:48.948 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:48.948 [702/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:48.948 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:49.209 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:49.209 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:49.209 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:49.209 [707/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:49.209 [708/745] Linking static target drivers/librte_net_i40e.a
00:01:49.505 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:49.505 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:49.763 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.763 [712/745] Linking target drivers/librte_net_i40e.so.23.0
00:01:51.137 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:51.137 [714/745] Linking static target lib/librte_node.a
00:01:51.137 [715/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:51.137 [716/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.137 [717/745] Linking target lib/librte_node.so.23.0
00:01:52.072 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:01:52.331 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:00.441 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:32.507 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:32.507 [722/745] Linking static target lib/librte_vhost.a
00:02:32.507 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.507 [724/745] Linking target lib/librte_vhost.so.23.0
00:02:42.481 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:42.481 [726/745] Linking static target lib/librte_pipeline.a
00:02:43.053 [727/745] Linking target app/dpdk-dumpcap
00:02:43.053 [728/745] Linking target app/dpdk-test-pipeline
00:02:43.053 [729/745] Linking target app/dpdk-test-flow-perf
00:02:43.053 [730/745] Linking target app/dpdk-test-regex
00:02:43.053 [731/745] Linking target app/dpdk-test-security-perf
00:02:43.053 [732/745] Linking target app/dpdk-test-bbdev
00:02:43.053 [733/745] Linking target app/dpdk-test-compress-perf
00:02:43.053 [734/745] Linking target app/dpdk-test-eventdev
00:02:43.053 [735/745] Linking target app/dpdk-test-fib
00:02:43.053 [736/745] Linking target app/dpdk-test-sad
00:02:43.053 [737/745] Linking target app/dpdk-test-acl
00:02:43.053 [738/745] Linking target app/dpdk-test-gpudev
00:02:43.053 [739/745] Linking target app/dpdk-pdump
00:02:43.053 [740/745] Linking target app/dpdk-proc-info
00:02:43.053 [741/745] Linking target app/dpdk-test-crypto-perf
00:02:43.053 [742/745] Linking target app/dpdk-test-cmdline
00:02:43.053 [743/745] Linking target app/dpdk-testpmd
00:02:44.953 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.953 [745/745] Linking target lib/librte_pipeline.so.23.0
00:02:44.953 16:23:52 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:02:44.953 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:44.953 [0/1] Installing files.
00:02:45.216 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.217 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.217 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:45.219 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
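The entries above stage the DPDK example sources (bpf, vmdq, link_status_interrupt, ip_reassembly, ...) under the build prefix's share/dpdk/examples directory. As an illustrative sketch only — not a command run by this job — an installed example such as ip_reassembly would typically be rebuilt against this tree via pkg-config; the pkgconfig directory below is an assumption based on the lib install paths in this log:
    # hypothetical usage sketch, not part of this CI run
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig   # assumed location of libdpdk.pc
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
    make   # the example Makefile resolves CFLAGS/LDFLAGS via: pkg-config --cflags --libs libdpdk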
00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:45.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:45.222 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.222 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.481 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.481 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.481 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.481 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.481 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.481 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.482 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.743 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.744 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.744 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.744 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.744 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.744 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:45.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:45.747 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:45.747 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:45.747 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:45.747 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:45.747 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:45.747 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:45.747 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:45.747 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:45.747 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:45.747 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:45.747 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:45.747 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:45.748 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:45.748 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:45.748 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:45.748 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:45.748 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:45.748 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:45.748 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:45.748 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:45.748 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:45.748 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:45.748 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:45.748 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:45.748 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:45.748 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:45.748 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:45.748 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:45.748 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:45.748 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:45.748 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:45.748 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:45.748 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:45.748 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:45.748 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:45.748 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:45.748 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:45.748 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:45.748 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:45.748 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:45.748 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:45.748 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:45.748 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:45.748 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:45.748 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:45.748 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:45.748 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:46.007 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:46.007 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:46.007 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:46.007 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:46.007 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:46.007 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:46.007 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:46.007 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:46.007 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:46.007 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:46.007 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:46.007 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:46.007 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:46.007 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:46.007 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:46.007 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:46.007 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:46.007 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:46.007 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:46.007 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:46.007 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:46.007 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:46.007 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:46.007 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:46.007 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:46.007 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:46.007 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:46.007 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:46.007 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:46.007 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:46.007 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:46.007 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:46.007 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:46.007 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:46.007 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:46.007 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:46.007 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:46.007 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:46.007 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:46.007 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:46.007 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:46.007 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:46.007 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:46.007 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:46.007 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:46.007 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:46.007 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:46.007 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:46.008 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:46.008 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:46.008 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:46.008 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:46.008 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:46.008 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:46.008 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:46.008 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:46.008 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:46.008 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:46.008 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:46.008 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:46.008 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:46.008 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:46.008 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:46.008 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:46.008 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:46.008 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:46.008 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:46.008 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:46.008 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:46.008 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:46.008 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:46.008 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:46.008 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:46.008 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:46.008 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:46.008 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:46.008 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:46.008 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:46.008 16:23:53 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:46.008 16:23:53 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:46.008 16:23:53 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:46.008 16:23:53 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.008 00:02:46.008 real 1m21.448s 00:02:46.008 user 14m30.539s 00:02:46.008 sys 1m49.507s 00:02:46.008 16:23:53 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:46.008 16:23:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:46.008 ************************************ 00:02:46.008 END TEST build_native_dpdk 00:02:46.008 ************************************ 00:02:46.008 16:23:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:46.008 16:23:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:46.008 16:23:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:46.008 16:23:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:46.008 16:23:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:46.008 16:23:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:46.008 16:23:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:46.008 16:23:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:46.008 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
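For context, the "Using ... pkgconfig for additional libs" step above resolves the staged DPDK through the libdpdk.pc installed a few lines earlier. A minimal sketch of that lookup, assuming a stock pkg-config (the path below simply mirrors the staging directory named in the log):

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --cflags libdpdk   # header flags; reported as "DPDK includes" in the next log line
    pkg-config --libs libdpdk     # linker flags for the staged librte_* libraries ("DPDK libraries" below)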
00:02:46.008 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.008 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.265 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:46.522 Using 'verbs' RDMA provider 00:02:57.056 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:05.164 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:05.422 Creating mk/config.mk...done. 00:03:05.422 Creating mk/cc.flags.mk...done. 00:03:05.422 Type 'make' to build. 00:03:05.422 16:24:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:05.422 16:24:12 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:05.422 16:24:12 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:05.422 16:24:12 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.422 ************************************ 00:03:05.422 START TEST make 00:03:05.422 ************************************ 00:03:05.422 16:24:12 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:05.680 make[1]: Nothing to be done for 'all'. 00:03:07.594 The Meson build system 00:03:07.594 Version: 1.3.1 00:03:07.594 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:07.594 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.594 Build type: native build 00:03:07.594 Project name: libvfio-user 00:03:07.594 Project version: 0.0.1 00:03:07.594 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:07.594 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:07.594 Host machine cpu family: x86_64 00:03:07.594 Host machine cpu: x86_64 00:03:07.594 Run-time dependency threads found: YES 00:03:07.594 Library dl found: YES 00:03:07.594 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:07.594 Run-time dependency json-c found: YES 0.17 00:03:07.594 Run-time dependency cmocka found: YES 1.1.7 00:03:07.594 Program pytest-3 found: NO 00:03:07.594 Program flake8 found: NO 00:03:07.594 Program misspell-fixer found: NO 00:03:07.594 Program restructuredtext-lint found: NO 00:03:07.594 Program valgrind found: YES (/usr/bin/valgrind) 00:03:07.594 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:07.594 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.594 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.594 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:07.594 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:07.594 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:07.594 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:07.594 Build targets in project: 8 00:03:07.594 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:07.594 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:07.594 00:03:07.594 libvfio-user 0.0.1 00:03:07.594 00:03:07.594 User defined options 00:03:07.594 buildtype : debug 00:03:07.594 default_library: shared 00:03:07.594 libdir : /usr/local/lib 00:03:07.594 00:03:07.594 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:08.175 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:08.175 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:08.175 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:08.175 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:08.175 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:08.175 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:08.175 [6/37] Compiling C object samples/null.p/null.c.o 00:03:08.446 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:08.446 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:08.446 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:08.446 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:08.446 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:08.446 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:08.446 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:08.446 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:08.446 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:08.446 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:08.446 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:08.446 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:08.446 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:08.446 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:08.446 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:08.446 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:08.446 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:08.446 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:08.446 [25/37] Compiling C object samples/server.p/server.c.o 00:03:08.446 [26/37] Compiling C object samples/client.p/client.c.o 00:03:08.446 [27/37] Linking target samples/client 00:03:08.446 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:08.707 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:08.707 [30/37] Linking target test/unit_tests 00:03:08.707 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:08.969 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:08.969 [33/37] Linking target samples/lspci 00:03:08.969 [34/37] Linking target samples/gpio-pci-idio-16 00:03:08.969 [35/37] Linking target samples/null 00:03:08.969 [36/37] Linking target samples/server 00:03:08.969 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:08.969 INFO: autodetecting backend as ninja 00:03:08.969 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
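The libvfio-user build above is the stock Meson/Ninja flow. A minimal sketch of the equivalent commands, matching the "User defined options" summary and the backend command just computed (the staging path in the last step is illustrative; the log stages into the spdk/build/libvfio-user tree):

    meson setup build-debug --buildtype=debug -Ddefault_library=shared   # configure step; options as summarized above
    ninja -C build-debug                                                 # compiles and links the 37 targets listed above
    DESTDIR=/tmp/stage meson install --quiet -C build-debug              # staged install, as run in the next log line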
00:03:08.969 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:09.945 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:09.945 ninja: no work to do. 00:03:22.156 CC lib/ut_mock/mock.o 00:03:22.156 CC lib/log/log.o 00:03:22.156 CC lib/log/log_flags.o 00:03:22.156 CC lib/ut/ut.o 00:03:22.156 CC lib/log/log_deprecated.o 00:03:22.156 LIB libspdk_ut_mock.a 00:03:22.156 SO libspdk_ut_mock.so.6.0 00:03:22.156 LIB libspdk_log.a 00:03:22.156 LIB libspdk_ut.a 00:03:22.156 SO libspdk_ut.so.2.0 00:03:22.156 SO libspdk_log.so.7.0 00:03:22.156 SYMLINK libspdk_ut_mock.so 00:03:22.156 SYMLINK libspdk_ut.so 00:03:22.156 SYMLINK libspdk_log.so 00:03:22.156 CXX lib/trace_parser/trace.o 00:03:22.156 CC lib/dma/dma.o 00:03:22.156 CC lib/ioat/ioat.o 00:03:22.156 CC lib/util/base64.o 00:03:22.156 CC lib/util/bit_array.o 00:03:22.156 CC lib/util/cpuset.o 00:03:22.156 CC lib/util/crc16.o 00:03:22.156 CC lib/util/crc32.o 00:03:22.156 CC lib/util/crc32c.o 00:03:22.156 CC lib/util/crc32_ieee.o 00:03:22.156 CC lib/util/crc64.o 00:03:22.156 CC lib/util/dif.o 00:03:22.156 CC lib/util/fd.o 00:03:22.156 CC lib/util/file.o 00:03:22.156 CC lib/util/hexlify.o 00:03:22.156 CC lib/util/iov.o 00:03:22.156 CC lib/util/math.o 00:03:22.156 CC lib/util/pipe.o 00:03:22.156 CC lib/util/strerror_tls.o 00:03:22.156 CC lib/util/string.o 00:03:22.156 CC lib/util/uuid.o 00:03:22.156 CC lib/util/fd_group.o 00:03:22.156 CC lib/util/xor.o 00:03:22.156 CC lib/util/zipf.o 00:03:22.156 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.156 CC lib/vfio_user/host/vfio_user.o 00:03:22.156 LIB libspdk_dma.a 00:03:22.156 SO libspdk_dma.so.4.0 00:03:22.156 LIB libspdk_ioat.a 00:03:22.156 SYMLINK libspdk_dma.so 00:03:22.156 SO libspdk_ioat.so.7.0 00:03:22.156 SYMLINK libspdk_ioat.so 00:03:22.157 LIB libspdk_vfio_user.a 00:03:22.157 SO libspdk_vfio_user.so.5.0 00:03:22.157 SYMLINK libspdk_vfio_user.so 00:03:22.157 LIB libspdk_util.a 00:03:22.157 SO libspdk_util.so.9.0 00:03:22.416 SYMLINK libspdk_util.so 00:03:22.416 CC lib/json/json_parse.o 00:03:22.416 CC lib/env_dpdk/env.o 00:03:22.416 CC lib/idxd/idxd.o 00:03:22.416 CC lib/json/json_util.o 00:03:22.416 CC lib/conf/conf.o 00:03:22.416 CC lib/rdma/common.o 00:03:22.416 CC lib/env_dpdk/memory.o 00:03:22.416 CC lib/vmd/vmd.o 00:03:22.416 CC lib/json/json_write.o 00:03:22.416 CC lib/idxd/idxd_user.o 00:03:22.416 CC lib/env_dpdk/pci.o 00:03:22.416 CC lib/rdma/rdma_verbs.o 00:03:22.416 CC lib/vmd/led.o 00:03:22.416 CC lib/env_dpdk/init.o 00:03:22.416 CC lib/env_dpdk/threads.o 00:03:22.416 CC lib/env_dpdk/pci_ioat.o 00:03:22.416 CC lib/env_dpdk/pci_virtio.o 00:03:22.416 CC lib/env_dpdk/pci_vmd.o 00:03:22.416 CC lib/env_dpdk/pci_idxd.o 00:03:22.416 CC lib/env_dpdk/pci_event.o 00:03:22.416 CC lib/env_dpdk/sigbus_handler.o 00:03:22.416 CC lib/env_dpdk/pci_dpdk.o 00:03:22.416 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.416 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.416 LIB libspdk_trace_parser.a 00:03:22.416 SO libspdk_trace_parser.so.5.0 00:03:22.674 SYMLINK libspdk_trace_parser.so 00:03:22.674 LIB libspdk_conf.a 00:03:22.674 SO libspdk_conf.so.6.0 00:03:22.674 LIB libspdk_json.a 00:03:22.933 LIB libspdk_rdma.a 00:03:22.933 SO libspdk_json.so.6.0 00:03:22.933 SYMLINK libspdk_conf.so 00:03:22.933 SO libspdk_rdma.so.6.0 00:03:22.933 SYMLINK libspdk_json.so 00:03:22.933 SYMLINK libspdk_rdma.so 00:03:22.933 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:22.933 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.933 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.933 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:23.191 LIB libspdk_idxd.a 00:03:23.191 SO libspdk_idxd.so.12.0 00:03:23.191 LIB libspdk_vmd.a 00:03:23.191 SYMLINK libspdk_idxd.so 00:03:23.191 SO libspdk_vmd.so.6.0 00:03:23.191 SYMLINK libspdk_vmd.so 00:03:23.191 LIB libspdk_jsonrpc.a 00:03:23.450 SO libspdk_jsonrpc.so.6.0 00:03:23.450 SYMLINK libspdk_jsonrpc.so 00:03:23.709 CC lib/rpc/rpc.o 00:03:23.709 LIB libspdk_rpc.a 00:03:23.709 SO libspdk_rpc.so.6.0 00:03:23.967 SYMLINK libspdk_rpc.so 00:03:23.967 CC lib/notify/notify.o 00:03:23.967 CC lib/notify/notify_rpc.o 00:03:23.967 CC lib/keyring/keyring.o 00:03:23.967 CC lib/keyring/keyring_rpc.o 00:03:23.967 CC lib/trace/trace.o 00:03:23.967 CC lib/trace/trace_rpc.o 00:03:23.967 CC lib/trace/trace_flags.o 00:03:24.226 LIB libspdk_notify.a 00:03:24.226 SO libspdk_notify.so.6.0 00:03:24.226 LIB libspdk_keyring.a 00:03:24.226 SYMLINK libspdk_notify.so 00:03:24.226 LIB libspdk_trace.a 00:03:24.226 SO libspdk_keyring.so.1.0 00:03:24.226 SO libspdk_trace.so.10.0 00:03:24.226 SYMLINK libspdk_keyring.so 00:03:24.484 SYMLINK libspdk_trace.so 00:03:24.484 LIB libspdk_env_dpdk.a 00:03:24.484 CC lib/sock/sock.o 00:03:24.484 CC lib/thread/thread.o 00:03:24.484 CC lib/sock/sock_rpc.o 00:03:24.484 CC lib/thread/iobuf.o 00:03:24.484 SO libspdk_env_dpdk.so.14.0 00:03:24.743 SYMLINK libspdk_env_dpdk.so 00:03:25.001 LIB libspdk_sock.a 00:03:25.001 SO libspdk_sock.so.9.0 00:03:25.001 SYMLINK libspdk_sock.so 00:03:25.260 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:25.260 CC lib/nvme/nvme_ctrlr.o 00:03:25.260 CC lib/nvme/nvme_fabric.o 00:03:25.260 CC lib/nvme/nvme_ns_cmd.o 00:03:25.260 CC lib/nvme/nvme_ns.o 00:03:25.260 CC lib/nvme/nvme_pcie_common.o 00:03:25.260 CC lib/nvme/nvme_pcie.o 00:03:25.260 CC lib/nvme/nvme_qpair.o 00:03:25.260 CC lib/nvme/nvme.o 00:03:25.260 CC lib/nvme/nvme_quirks.o 00:03:25.260 CC lib/nvme/nvme_transport.o 00:03:25.260 CC lib/nvme/nvme_discovery.o 00:03:25.260 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:25.260 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:25.260 CC lib/nvme/nvme_tcp.o 00:03:25.260 CC lib/nvme/nvme_opal.o 00:03:25.260 CC lib/nvme/nvme_io_msg.o 00:03:25.260 CC lib/nvme/nvme_poll_group.o 00:03:25.260 CC lib/nvme/nvme_zns.o 00:03:25.260 CC lib/nvme/nvme_stubs.o 00:03:25.260 CC lib/nvme/nvme_auth.o 00:03:25.260 CC lib/nvme/nvme_cuse.o 00:03:25.260 CC lib/nvme/nvme_vfio_user.o 00:03:25.260 CC lib/nvme/nvme_rdma.o 00:03:26.200 LIB libspdk_thread.a 00:03:26.200 SO libspdk_thread.so.10.0 00:03:26.200 SYMLINK libspdk_thread.so 00:03:26.458 CC lib/vfu_tgt/tgt_endpoint.o 00:03:26.458 CC lib/blob/blobstore.o 00:03:26.458 CC lib/init/json_config.o 00:03:26.458 CC lib/vfu_tgt/tgt_rpc.o 00:03:26.458 CC lib/blob/request.o 00:03:26.458 CC lib/init/subsystem.o 00:03:26.458 CC lib/init/subsystem_rpc.o 00:03:26.458 CC lib/blob/zeroes.o 00:03:26.458 CC lib/init/rpc.o 00:03:26.458 CC lib/blob/blob_bs_dev.o 00:03:26.458 CC lib/accel/accel.o 00:03:26.458 CC lib/virtio/virtio.o 00:03:26.458 CC lib/virtio/virtio_vhost_user.o 00:03:26.458 CC lib/accel/accel_rpc.o 00:03:26.458 CC lib/accel/accel_sw.o 00:03:26.458 CC lib/virtio/virtio_vfio_user.o 00:03:26.458 CC lib/virtio/virtio_pci.o 00:03:26.715 LIB libspdk_init.a 00:03:26.715 SO libspdk_init.so.5.0 00:03:26.715 LIB libspdk_vfu_tgt.a 00:03:26.715 LIB libspdk_virtio.a 00:03:26.715 SYMLINK libspdk_init.so 00:03:26.715 SO libspdk_vfu_tgt.so.3.0 00:03:26.715 SO libspdk_virtio.so.7.0 
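The recurring LIB/SO/SYMLINK triples in this compile output, like the librte_* symlink installs earlier, follow the usual ELF shared-library versioning convention. A minimal sketch of the pattern with a hypothetical libfoo (not a file from this build):

    gcc -shared -Wl,-soname,libfoo.so.5 -o libfoo.so.5.0 foo.o   # the real file carries the full version
    ln -sf libfoo.so.5.0 libfoo.so.5                              # runtime link matching the embedded soname
    ln -sf libfoo.so.5 libfoo.so                                  # development link, resolved by -lfoo at link time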
00:03:26.715 SYMLINK libspdk_vfu_tgt.so 00:03:26.715 SYMLINK libspdk_virtio.so 00:03:26.974 CC lib/event/app.o 00:03:26.974 CC lib/event/reactor.o 00:03:26.974 CC lib/event/log_rpc.o 00:03:26.974 CC lib/event/app_rpc.o 00:03:26.974 CC lib/event/scheduler_static.o 00:03:27.232 LIB libspdk_event.a 00:03:27.232 SO libspdk_event.so.13.0 00:03:27.490 LIB libspdk_accel.a 00:03:27.490 SYMLINK libspdk_event.so 00:03:27.490 SO libspdk_accel.so.15.0 00:03:27.490 SYMLINK libspdk_accel.so 00:03:27.490 LIB libspdk_nvme.a 00:03:27.748 CC lib/bdev/bdev.o 00:03:27.748 CC lib/bdev/bdev_rpc.o 00:03:27.748 CC lib/bdev/bdev_zone.o 00:03:27.748 CC lib/bdev/part.o 00:03:27.748 CC lib/bdev/scsi_nvme.o 00:03:27.748 SO libspdk_nvme.so.13.0 00:03:28.007 SYMLINK libspdk_nvme.so 00:03:29.382 LIB libspdk_blob.a 00:03:29.382 SO libspdk_blob.so.11.0 00:03:29.382 SYMLINK libspdk_blob.so 00:03:29.640 CC lib/lvol/lvol.o 00:03:29.640 CC lib/blobfs/blobfs.o 00:03:29.640 CC lib/blobfs/tree.o 00:03:30.206 LIB libspdk_bdev.a 00:03:30.206 SO libspdk_bdev.so.15.0 00:03:30.206 LIB libspdk_blobfs.a 00:03:30.474 SO libspdk_blobfs.so.10.0 00:03:30.474 SYMLINK libspdk_bdev.so 00:03:30.474 LIB libspdk_lvol.a 00:03:30.474 SYMLINK libspdk_blobfs.so 00:03:30.474 SO libspdk_lvol.so.10.0 00:03:30.474 SYMLINK libspdk_lvol.so 00:03:30.474 CC lib/nbd/nbd.o 00:03:30.474 CC lib/ftl/ftl_core.o 00:03:30.474 CC lib/nbd/nbd_rpc.o 00:03:30.474 CC lib/ublk/ublk.o 00:03:30.474 CC lib/scsi/dev.o 00:03:30.474 CC lib/ftl/ftl_init.o 00:03:30.474 CC lib/nvmf/ctrlr.o 00:03:30.474 CC lib/ftl/ftl_layout.o 00:03:30.474 CC lib/scsi/lun.o 00:03:30.474 CC lib/ublk/ublk_rpc.o 00:03:30.474 CC lib/nvmf/ctrlr_discovery.o 00:03:30.474 CC lib/ftl/ftl_debug.o 00:03:30.474 CC lib/scsi/port.o 00:03:30.474 CC lib/nvmf/ctrlr_bdev.o 00:03:30.474 CC lib/ftl/ftl_io.o 00:03:30.474 CC lib/nvmf/subsystem.o 00:03:30.474 CC lib/scsi/scsi.o 00:03:30.474 CC lib/ftl/ftl_sb.o 00:03:30.474 CC lib/scsi/scsi_bdev.o 00:03:30.474 CC lib/nvmf/nvmf.o 00:03:30.474 CC lib/scsi/scsi_pr.o 00:03:30.474 CC lib/nvmf/nvmf_rpc.o 00:03:30.474 CC lib/ftl/ftl_l2p.o 00:03:30.474 CC lib/ftl/ftl_l2p_flat.o 00:03:30.474 CC lib/scsi/scsi_rpc.o 00:03:30.474 CC lib/nvmf/transport.o 00:03:30.474 CC lib/ftl/ftl_nv_cache.o 00:03:30.474 CC lib/ftl/ftl_band.o 00:03:30.474 CC lib/nvmf/tcp.o 00:03:30.474 CC lib/scsi/task.o 00:03:30.474 CC lib/nvmf/mdns_server.o 00:03:30.474 CC lib/nvmf/stubs.o 00:03:30.474 CC lib/ftl/ftl_band_ops.o 00:03:30.474 CC lib/ftl/ftl_writer.o 00:03:30.474 CC lib/ftl/ftl_rq.o 00:03:30.474 CC lib/nvmf/vfio_user.o 00:03:30.474 CC lib/ftl/ftl_reloc.o 00:03:30.474 CC lib/nvmf/rdma.o 00:03:30.474 CC lib/ftl/ftl_p2l.o 00:03:30.474 CC lib/ftl/ftl_l2p_cache.o 00:03:30.474 CC lib/nvmf/auth.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:30.474 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.044 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.044 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.044 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.044 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.044 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.044 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.044 CC lib/ftl/utils/ftl_conf.o 00:03:31.044 CC lib/ftl/utils/ftl_md.o 00:03:31.044 CC lib/ftl/utils/ftl_mempool.o 00:03:31.044 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.044 CC lib/ftl/utils/ftl_property.o 00:03:31.044 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.044 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.044 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.044 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.044 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.044 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.044 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.044 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.044 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.306 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.306 CC lib/ftl/base/ftl_base_dev.o 00:03:31.306 CC lib/ftl/base/ftl_base_bdev.o 00:03:31.306 CC lib/ftl/ftl_trace.o 00:03:31.306 LIB libspdk_nbd.a 00:03:31.306 SO libspdk_nbd.so.7.0 00:03:31.564 SYMLINK libspdk_nbd.so 00:03:31.564 LIB libspdk_scsi.a 00:03:31.564 SO libspdk_scsi.so.9.0 00:03:31.564 SYMLINK libspdk_scsi.so 00:03:31.564 LIB libspdk_ublk.a 00:03:31.564 SO libspdk_ublk.so.3.0 00:03:31.823 SYMLINK libspdk_ublk.so 00:03:31.823 CC lib/vhost/vhost.o 00:03:31.823 CC lib/iscsi/conn.o 00:03:31.823 CC lib/vhost/vhost_rpc.o 00:03:31.823 CC lib/iscsi/init_grp.o 00:03:31.823 CC lib/vhost/vhost_scsi.o 00:03:31.823 CC lib/iscsi/iscsi.o 00:03:31.823 CC lib/vhost/vhost_blk.o 00:03:31.823 CC lib/iscsi/md5.o 00:03:31.823 CC lib/vhost/rte_vhost_user.o 00:03:31.823 CC lib/iscsi/param.o 00:03:31.823 CC lib/iscsi/portal_grp.o 00:03:31.823 CC lib/iscsi/tgt_node.o 00:03:31.823 CC lib/iscsi/iscsi_subsystem.o 00:03:31.823 CC lib/iscsi/iscsi_rpc.o 00:03:31.823 CC lib/iscsi/task.o 00:03:31.823 LIB libspdk_ftl.a 00:03:32.081 SO libspdk_ftl.so.9.0 00:03:32.339 SYMLINK libspdk_ftl.so 00:03:32.905 LIB libspdk_vhost.a 00:03:33.162 SO libspdk_vhost.so.8.0 00:03:33.163 LIB libspdk_nvmf.a 00:03:33.163 SO libspdk_nvmf.so.18.0 00:03:33.163 SYMLINK libspdk_vhost.so 00:03:33.163 LIB libspdk_iscsi.a 00:03:33.447 SO libspdk_iscsi.so.8.0 00:03:33.447 SYMLINK libspdk_nvmf.so 00:03:33.447 SYMLINK libspdk_iscsi.so 00:03:33.705 CC module/env_dpdk/env_dpdk_rpc.o 00:03:33.705 CC module/vfu_device/vfu_virtio.o 00:03:33.705 CC module/vfu_device/vfu_virtio_blk.o 00:03:33.705 CC module/vfu_device/vfu_virtio_scsi.o 00:03:33.705 CC module/vfu_device/vfu_virtio_rpc.o 00:03:33.705 CC module/accel/dsa/accel_dsa.o 00:03:33.705 CC module/blob/bdev/blob_bdev.o 00:03:33.705 CC module/sock/posix/posix.o 00:03:33.705 CC module/accel/dsa/accel_dsa_rpc.o 00:03:33.705 CC module/keyring/file/keyring.o 00:03:33.705 CC module/accel/iaa/accel_iaa.o 00:03:33.705 CC module/keyring/file/keyring_rpc.o 00:03:33.705 CC module/accel/error/accel_error.o 00:03:33.705 CC module/accel/error/accel_error_rpc.o 00:03:33.705 CC module/accel/iaa/accel_iaa_rpc.o 00:03:33.705 CC module/scheduler/gscheduler/gscheduler.o 00:03:33.705 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:33.705 CC module/accel/ioat/accel_ioat.o 00:03:33.705 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:33.705 CC module/accel/ioat/accel_ioat_rpc.o 00:03:33.963 LIB libspdk_env_dpdk_rpc.a 00:03:33.963 SO libspdk_env_dpdk_rpc.so.6.0 00:03:33.963 SYMLINK libspdk_env_dpdk_rpc.so 00:03:33.963 LIB libspdk_keyring_file.a 00:03:33.963 LIB libspdk_scheduler_gscheduler.a 00:03:33.963 LIB libspdk_scheduler_dpdk_governor.a 00:03:33.963 SO libspdk_scheduler_gscheduler.so.4.0 00:03:33.963 SO libspdk_keyring_file.so.1.0 00:03:33.963 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:33.963 LIB libspdk_accel_error.a 00:03:33.963 LIB libspdk_accel_ioat.a 00:03:33.963 LIB libspdk_scheduler_dynamic.a 00:03:33.963 LIB libspdk_accel_iaa.a 00:03:33.963 SO libspdk_accel_error.so.2.0 00:03:34.233 SO 
libspdk_scheduler_dynamic.so.4.0 00:03:34.233 SYMLINK libspdk_scheduler_gscheduler.so 00:03:34.233 SO libspdk_accel_ioat.so.6.0 00:03:34.233 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:34.233 SYMLINK libspdk_keyring_file.so 00:03:34.233 SO libspdk_accel_iaa.so.3.0 00:03:34.233 LIB libspdk_accel_dsa.a 00:03:34.233 SYMLINK libspdk_accel_error.so 00:03:34.233 SO libspdk_accel_dsa.so.5.0 00:03:34.233 LIB libspdk_blob_bdev.a 00:03:34.233 SYMLINK libspdk_scheduler_dynamic.so 00:03:34.233 SYMLINK libspdk_accel_ioat.so 00:03:34.233 SYMLINK libspdk_accel_iaa.so 00:03:34.233 SO libspdk_blob_bdev.so.11.0 00:03:34.233 SYMLINK libspdk_accel_dsa.so 00:03:34.233 SYMLINK libspdk_blob_bdev.so 00:03:34.493 LIB libspdk_vfu_device.a 00:03:34.493 SO libspdk_vfu_device.so.3.0 00:03:34.493 CC module/bdev/null/bdev_null.o 00:03:34.493 CC module/bdev/split/vbdev_split.o 00:03:34.493 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.493 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.493 CC module/bdev/null/bdev_null_rpc.o 00:03:34.493 CC module/bdev/malloc/bdev_malloc.o 00:03:34.493 CC module/bdev/aio/bdev_aio.o 00:03:34.493 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.493 CC module/bdev/delay/vbdev_delay.o 00:03:34.494 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.494 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.494 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:34.494 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:34.494 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.494 CC module/bdev/lvol/vbdev_lvol.o 00:03:34.494 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.494 CC module/bdev/nvme/bdev_nvme.o 00:03:34.494 CC module/bdev/ftl/bdev_ftl.o 00:03:34.494 CC module/bdev/error/vbdev_error.o 00:03:34.494 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:34.494 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:34.494 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.494 CC module/bdev/nvme/nvme_rpc.o 00:03:34.494 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.494 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.494 CC module/bdev/gpt/gpt.o 00:03:34.494 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:34.494 CC module/bdev/iscsi/bdev_iscsi.o 00:03:34.494 CC module/bdev/nvme/bdev_mdns_client.o 00:03:34.494 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.494 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:34.494 CC module/bdev/raid/bdev_raid.o 00:03:34.494 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.494 CC module/bdev/nvme/vbdev_opal.o 00:03:34.494 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.494 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:34.494 CC module/bdev/raid/bdev_raid_rpc.o 00:03:34.494 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:34.494 CC module/bdev/raid/bdev_raid_sb.o 00:03:34.494 CC module/bdev/raid/raid0.o 00:03:34.494 CC module/bdev/raid/raid1.o 00:03:34.494 CC module/bdev/raid/concat.o 00:03:34.494 SYMLINK libspdk_vfu_device.so 00:03:34.752 LIB libspdk_sock_posix.a 00:03:34.752 SO libspdk_sock_posix.so.6.0 00:03:34.752 LIB libspdk_blobfs_bdev.a 00:03:35.010 SYMLINK libspdk_sock_posix.so 00:03:35.010 SO libspdk_blobfs_bdev.so.6.0 00:03:35.010 LIB libspdk_bdev_split.a 00:03:35.010 SO libspdk_bdev_split.so.6.0 00:03:35.010 LIB libspdk_bdev_null.a 00:03:35.010 SYMLINK libspdk_blobfs_bdev.so 00:03:35.010 LIB libspdk_bdev_gpt.a 00:03:35.010 SO libspdk_bdev_null.so.6.0 00:03:35.010 SYMLINK libspdk_bdev_split.so 00:03:35.010 SO libspdk_bdev_gpt.so.6.0 00:03:35.010 LIB libspdk_bdev_error.a 00:03:35.010 LIB libspdk_bdev_ftl.a 00:03:35.010 SYMLINK libspdk_bdev_null.so 00:03:35.010 LIB libspdk_bdev_malloc.a 
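A note on the quiet-make notices running through this section: each SPDK library is emitted three ways, a LIB line for the static archive, an SO line for the versioned shared object, and a SYMLINK line for the unversioned development link. Below is a minimal sketch of the three commands behind those notices, using libspdk_jsonrpc from earlier in the log as the example; the flags are simplified stand-ins, not a copy of SPDK's actual mk/ recipes.

ar crs libspdk_jsonrpc.a $OBJS                     # prints "LIB libspdk_jsonrpc.a"
cc -shared -Wl,-soname,libspdk_jsonrpc.so.6.0 \
   -o libspdk_jsonrpc.so.6.0 $OBJS                 # prints "SO libspdk_jsonrpc.so.6.0"
ln -sf libspdk_jsonrpc.so.6.0 libspdk_jsonrpc.so   # prints "SYMLINK libspdk_jsonrpc.so"

The version embedded in the soname (6.0 here) is the same one the later "SO libspdk_*.so.N.M" checks in this log key off.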
00:03:35.010 LIB libspdk_bdev_passthru.a 00:03:35.010 SO libspdk_bdev_error.so.6.0 00:03:35.010 SYMLINK libspdk_bdev_gpt.so 00:03:35.010 LIB libspdk_bdev_aio.a 00:03:35.010 SO libspdk_bdev_ftl.so.6.0 00:03:35.010 LIB libspdk_bdev_zone_block.a 00:03:35.010 SO libspdk_bdev_malloc.so.6.0 00:03:35.010 SO libspdk_bdev_passthru.so.6.0 00:03:35.010 SO libspdk_bdev_aio.so.6.0 00:03:35.010 LIB libspdk_bdev_iscsi.a 00:03:35.010 SO libspdk_bdev_zone_block.so.6.0 00:03:35.010 SYMLINK libspdk_bdev_error.so 00:03:35.010 SYMLINK libspdk_bdev_ftl.so 00:03:35.010 SO libspdk_bdev_iscsi.so.6.0 00:03:35.010 LIB libspdk_bdev_delay.a 00:03:35.010 SYMLINK libspdk_bdev_passthru.so 00:03:35.010 SYMLINK libspdk_bdev_malloc.so 00:03:35.010 SYMLINK libspdk_bdev_aio.so 00:03:35.267 SYMLINK libspdk_bdev_zone_block.so 00:03:35.267 SO libspdk_bdev_delay.so.6.0 00:03:35.267 SYMLINK libspdk_bdev_iscsi.so 00:03:35.267 SYMLINK libspdk_bdev_delay.so 00:03:35.267 LIB libspdk_bdev_virtio.a 00:03:35.267 SO libspdk_bdev_virtio.so.6.0 00:03:35.267 LIB libspdk_bdev_lvol.a 00:03:35.267 SO libspdk_bdev_lvol.so.6.0 00:03:35.267 SYMLINK libspdk_bdev_virtio.so 00:03:35.267 SYMLINK libspdk_bdev_lvol.so 00:03:35.525 LIB libspdk_bdev_raid.a 00:03:35.525 SO libspdk_bdev_raid.so.6.0 00:03:35.783 SYMLINK libspdk_bdev_raid.so 00:03:36.716 LIB libspdk_bdev_nvme.a 00:03:36.974 SO libspdk_bdev_nvme.so.7.0 00:03:36.974 SYMLINK libspdk_bdev_nvme.so 00:03:37.233 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:37.233 CC module/event/subsystems/sock/sock.o 00:03:37.233 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.233 CC module/event/subsystems/scheduler/scheduler.o 00:03:37.233 CC module/event/subsystems/keyring/keyring.o 00:03:37.233 CC module/event/subsystems/vmd/vmd.o 00:03:37.233 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.233 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.233 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.490 LIB libspdk_event_sock.a 00:03:37.490 LIB libspdk_event_keyring.a 00:03:37.490 LIB libspdk_event_vhost_blk.a 00:03:37.490 LIB libspdk_event_scheduler.a 00:03:37.490 LIB libspdk_event_vfu_tgt.a 00:03:37.490 LIB libspdk_event_vmd.a 00:03:37.490 SO libspdk_event_keyring.so.1.0 00:03:37.491 SO libspdk_event_sock.so.5.0 00:03:37.491 LIB libspdk_event_iobuf.a 00:03:37.491 SO libspdk_event_vhost_blk.so.3.0 00:03:37.491 SO libspdk_event_scheduler.so.4.0 00:03:37.491 SO libspdk_event_vfu_tgt.so.3.0 00:03:37.491 SO libspdk_event_vmd.so.6.0 00:03:37.491 SO libspdk_event_iobuf.so.3.0 00:03:37.491 SYMLINK libspdk_event_sock.so 00:03:37.491 SYMLINK libspdk_event_keyring.so 00:03:37.491 SYMLINK libspdk_event_vhost_blk.so 00:03:37.491 SYMLINK libspdk_event_scheduler.so 00:03:37.491 SYMLINK libspdk_event_vfu_tgt.so 00:03:37.491 SYMLINK libspdk_event_vmd.so 00:03:37.491 SYMLINK libspdk_event_iobuf.so 00:03:37.748 CC module/event/subsystems/accel/accel.o 00:03:38.006 LIB libspdk_event_accel.a 00:03:38.006 SO libspdk_event_accel.so.6.0 00:03:38.006 SYMLINK libspdk_event_accel.so 00:03:38.264 CC module/event/subsystems/bdev/bdev.o 00:03:38.264 LIB libspdk_event_bdev.a 00:03:38.264 SO libspdk_event_bdev.so.6.0 00:03:38.522 SYMLINK libspdk_event_bdev.so 00:03:38.522 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:38.522 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:38.522 CC module/event/subsystems/scsi/scsi.o 00:03:38.522 CC module/event/subsystems/ublk/ublk.o 00:03:38.522 CC module/event/subsystems/nbd/nbd.o 00:03:38.779 LIB libspdk_event_nbd.a 00:03:38.779 LIB libspdk_event_ublk.a 00:03:38.779 SO 
libspdk_event_nbd.so.6.0 00:03:38.779 LIB libspdk_event_scsi.a 00:03:38.779 SO libspdk_event_ublk.so.3.0 00:03:38.779 SO libspdk_event_scsi.so.6.0 00:03:38.779 SYMLINK libspdk_event_nbd.so 00:03:38.779 SYMLINK libspdk_event_ublk.so 00:03:38.779 LIB libspdk_event_nvmf.a 00:03:38.779 SYMLINK libspdk_event_scsi.so 00:03:38.780 SO libspdk_event_nvmf.so.6.0 00:03:39.037 SYMLINK libspdk_event_nvmf.so 00:03:39.037 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.037 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:39.037 LIB libspdk_event_vhost_scsi.a 00:03:39.294 LIB libspdk_event_iscsi.a 00:03:39.294 SO libspdk_event_vhost_scsi.so.3.0 00:03:39.294 SO libspdk_event_iscsi.so.6.0 00:03:39.294 SYMLINK libspdk_event_vhost_scsi.so 00:03:39.294 SYMLINK libspdk_event_iscsi.so 00:03:39.294 SO libspdk.so.6.0 00:03:39.294 SYMLINK libspdk.so 00:03:39.553 CC app/trace_record/trace_record.o 00:03:39.553 CXX app/trace/trace.o 00:03:39.553 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.553 CC app/spdk_top/spdk_top.o 00:03:39.553 TEST_HEADER include/spdk/accel.h 00:03:39.553 CC app/spdk_nvme_identify/identify.o 00:03:39.553 CC app/spdk_lspci/spdk_lspci.o 00:03:39.553 TEST_HEADER include/spdk/accel_module.h 00:03:39.553 CC test/rpc_client/rpc_client_test.o 00:03:39.553 CC app/spdk_nvme_perf/perf.o 00:03:39.553 TEST_HEADER include/spdk/assert.h 00:03:39.553 TEST_HEADER include/spdk/barrier.h 00:03:39.553 TEST_HEADER include/spdk/base64.h 00:03:39.553 TEST_HEADER include/spdk/bdev.h 00:03:39.553 TEST_HEADER include/spdk/bdev_module.h 00:03:39.553 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.553 TEST_HEADER include/spdk/bit_array.h 00:03:39.553 TEST_HEADER include/spdk/bit_pool.h 00:03:39.553 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.553 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.553 TEST_HEADER include/spdk/blobfs.h 00:03:39.553 TEST_HEADER include/spdk/blob.h 00:03:39.553 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.553 TEST_HEADER include/spdk/conf.h 00:03:39.553 TEST_HEADER include/spdk/config.h 00:03:39.553 TEST_HEADER include/spdk/cpuset.h 00:03:39.553 CC app/spdk_dd/spdk_dd.o 00:03:39.553 TEST_HEADER include/spdk/crc16.h 00:03:39.553 TEST_HEADER include/spdk/crc32.h 00:03:39.553 TEST_HEADER include/spdk/crc64.h 00:03:39.553 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.553 TEST_HEADER include/spdk/dif.h 00:03:39.553 CC app/nvmf_tgt/nvmf_main.o 00:03:39.553 TEST_HEADER include/spdk/dma.h 00:03:39.553 TEST_HEADER include/spdk/endian.h 00:03:39.553 CC app/vhost/vhost.o 00:03:39.553 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.814 TEST_HEADER include/spdk/env.h 00:03:39.814 TEST_HEADER include/spdk/event.h 00:03:39.814 TEST_HEADER include/spdk/fd_group.h 00:03:39.814 TEST_HEADER include/spdk/fd.h 00:03:39.814 TEST_HEADER include/spdk/file.h 00:03:39.814 TEST_HEADER include/spdk/ftl.h 00:03:39.814 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.814 TEST_HEADER include/spdk/hexlify.h 00:03:39.814 TEST_HEADER include/spdk/histogram_data.h 00:03:39.814 TEST_HEADER include/spdk/idxd.h 00:03:39.814 CC examples/ioat/perf/perf.o 00:03:39.814 CC examples/ioat/verify/verify.o 00:03:39.814 CC examples/accel/perf/accel_perf.o 00:03:39.814 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.814 CC examples/nvme/hello_world/hello_world.o 00:03:39.814 CC app/spdk_tgt/spdk_tgt.o 00:03:39.814 TEST_HEADER include/spdk/init.h 00:03:39.814 CC examples/idxd/perf/perf.o 00:03:39.814 CC examples/util/zipf/zipf.o 00:03:39.814 TEST_HEADER include/spdk/ioat.h 00:03:39.814 CC examples/vmd/lsvmd/lsvmd.o 00:03:39.814 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:39.814 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.814 CC test/event/event_perf/event_perf.o 00:03:39.814 CC examples/sock/hello_world/hello_sock.o 00:03:39.814 TEST_HEADER include/spdk/json.h 00:03:39.814 CC examples/vmd/led/led.o 00:03:39.814 CC test/event/reactor_perf/reactor_perf.o 00:03:39.814 CC examples/nvme/hotplug/hotplug.o 00:03:39.814 CC examples/nvme/reconnect/reconnect.o 00:03:39.814 CC test/event/reactor/reactor.o 00:03:39.814 CC examples/nvme/arbitration/arbitration.o 00:03:39.814 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.814 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.814 TEST_HEADER include/spdk/keyring.h 00:03:39.814 CC app/fio/nvme/fio_plugin.o 00:03:39.815 TEST_HEADER include/spdk/keyring_module.h 00:03:39.815 CC test/thread/poller_perf/poller_perf.o 00:03:39.815 TEST_HEADER include/spdk/likely.h 00:03:39.815 TEST_HEADER include/spdk/log.h 00:03:39.815 TEST_HEADER include/spdk/lvol.h 00:03:39.815 CC test/nvme/aer/aer.o 00:03:39.815 TEST_HEADER include/spdk/memory.h 00:03:39.815 TEST_HEADER include/spdk/mmio.h 00:03:39.815 TEST_HEADER include/spdk/nbd.h 00:03:39.815 TEST_HEADER include/spdk/notify.h 00:03:39.815 TEST_HEADER include/spdk/nvme.h 00:03:39.815 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.815 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.815 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.815 CC examples/blob/cli/blobcli.o 00:03:39.815 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.815 CC examples/blob/hello_world/hello_blob.o 00:03:39.815 CC examples/bdev/hello_world/hello_bdev.o 00:03:39.815 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.815 CC examples/nvmf/nvmf/nvmf.o 00:03:39.815 CC examples/thread/thread/thread_ex.o 00:03:39.815 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.815 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.815 CC test/dma/test_dma/test_dma.o 00:03:39.815 TEST_HEADER include/spdk/nvmf.h 00:03:39.815 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.815 CC test/blobfs/mkfs/mkfs.o 00:03:39.815 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.815 CC test/accel/dif/dif.o 00:03:39.815 CC test/bdev/bdevio/bdevio.o 00:03:39.815 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.815 CC app/fio/bdev/fio_plugin.o 00:03:39.815 TEST_HEADER include/spdk/opal.h 00:03:39.815 TEST_HEADER include/spdk/opal_spec.h 00:03:39.815 TEST_HEADER include/spdk/pci_ids.h 00:03:39.815 TEST_HEADER include/spdk/pipe.h 00:03:39.815 TEST_HEADER include/spdk/queue.h 00:03:39.815 CC test/app/bdev_svc/bdev_svc.o 00:03:39.815 TEST_HEADER include/spdk/reduce.h 00:03:39.815 TEST_HEADER include/spdk/rpc.h 00:03:39.815 TEST_HEADER include/spdk/scheduler.h 00:03:39.815 TEST_HEADER include/spdk/scsi.h 00:03:39.815 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.815 TEST_HEADER include/spdk/sock.h 00:03:39.815 TEST_HEADER include/spdk/stdinc.h 00:03:39.815 TEST_HEADER include/spdk/string.h 00:03:39.815 TEST_HEADER include/spdk/thread.h 00:03:39.815 TEST_HEADER include/spdk/trace.h 00:03:39.815 TEST_HEADER include/spdk/trace_parser.h 00:03:39.815 TEST_HEADER include/spdk/tree.h 00:03:39.815 TEST_HEADER include/spdk/ublk.h 00:03:39.815 TEST_HEADER include/spdk/util.h 00:03:39.815 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.815 LINK spdk_lspci 00:03:39.815 TEST_HEADER include/spdk/uuid.h 00:03:39.815 TEST_HEADER include/spdk/version.h 00:03:39.815 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.815 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.815 CC test/lvol/esnap/esnap.o 00:03:39.815 TEST_HEADER include/spdk/vhost.h 
00:03:39.815 TEST_HEADER include/spdk/vmd.h 00:03:39.815 TEST_HEADER include/spdk/xor.h 00:03:39.815 TEST_HEADER include/spdk/zipf.h 00:03:39.815 CXX test/cpp_headers/accel.o 00:03:40.079 LINK rpc_client_test 00:03:40.079 LINK spdk_nvme_discover 00:03:40.079 LINK lsvmd 00:03:40.079 LINK interrupt_tgt 00:03:40.079 LINK event_perf 00:03:40.079 LINK reactor_perf 00:03:40.079 LINK led 00:03:40.079 LINK reactor 00:03:40.079 LINK zipf 00:03:40.079 LINK nvmf_tgt 00:03:40.079 LINK poller_perf 00:03:40.079 LINK vhost 00:03:40.079 LINK iscsi_tgt 00:03:40.079 LINK spdk_trace_record 00:03:40.079 LINK spdk_tgt 00:03:40.079 LINK ioat_perf 00:03:40.079 LINK verify 00:03:40.079 LINK hello_world 00:03:40.341 LINK bdev_svc 00:03:40.341 LINK hello_sock 00:03:40.341 LINK hotplug 00:03:40.341 LINK mkfs 00:03:40.341 LINK hello_blob 00:03:40.341 LINK hello_bdev 00:03:40.341 LINK mem_callbacks 00:03:40.341 LINK thread 00:03:40.341 CXX test/cpp_headers/accel_module.o 00:03:40.341 LINK aer 00:03:40.341 LINK spdk_dd 00:03:40.341 LINK arbitration 00:03:40.341 LINK idxd_perf 00:03:40.341 LINK nvmf 00:03:40.341 LINK reconnect 00:03:40.607 CC test/nvme/reset/reset.o 00:03:40.607 LINK spdk_trace 00:03:40.607 CXX test/cpp_headers/assert.o 00:03:40.607 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.607 CC test/app/histogram_perf/histogram_perf.o 00:03:40.607 CC test/env/vtophys/vtophys.o 00:03:40.607 LINK bdevio 00:03:40.607 LINK dif 00:03:40.607 LINK test_dma 00:03:40.607 CC test/nvme/e2edp/nvme_dp.o 00:03:40.607 CC test/nvme/overhead/overhead.o 00:03:40.607 CC test/nvme/sgl/sgl.o 00:03:40.607 LINK accel_perf 00:03:40.607 CC test/app/jsoncat/jsoncat.o 00:03:40.607 CXX test/cpp_headers/barrier.o 00:03:40.607 CC test/event/app_repeat/app_repeat.o 00:03:40.607 LINK nvme_manage 00:03:40.607 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.607 CXX test/cpp_headers/base64.o 00:03:40.875 CC test/event/scheduler/scheduler.o 00:03:40.875 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:40.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:40.875 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.875 CC examples/nvme/abort/abort.o 00:03:40.875 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.875 CC test/env/memory/memory_ut.o 00:03:40.875 LINK blobcli 00:03:40.875 CXX test/cpp_headers/bdev.o 00:03:40.875 CC test/app/stub/stub.o 00:03:40.875 LINK spdk_bdev 00:03:40.875 LINK spdk_nvme 00:03:40.875 CC test/env/pci/pci_ut.o 00:03:40.875 CC test/nvme/err_injection/err_injection.o 00:03:40.875 LINK histogram_perf 00:03:40.875 CXX test/cpp_headers/bdev_module.o 00:03:40.875 CC test/nvme/startup/startup.o 00:03:40.875 LINK vtophys 00:03:40.875 CC test/nvme/reserve/reserve.o 00:03:40.875 LINK cmb_copy 00:03:40.875 CC test/nvme/simple_copy/simple_copy.o 00:03:40.875 LINK jsoncat 00:03:40.875 CC test/nvme/connect_stress/connect_stress.o 00:03:40.875 LINK app_repeat 00:03:41.136 CC test/nvme/compliance/nvme_compliance.o 00:03:41.136 LINK reset 00:03:41.136 CXX test/cpp_headers/bdev_zone.o 00:03:41.136 CC test/nvme/boot_partition/boot_partition.o 00:03:41.136 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.136 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.136 CC test/nvme/fdp/fdp.o 00:03:41.136 CXX test/cpp_headers/bit_array.o 00:03:41.136 CXX test/cpp_headers/bit_pool.o 00:03:41.136 CXX test/cpp_headers/blob_bdev.o 00:03:41.136 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.136 CXX test/cpp_headers/blobfs.o 00:03:41.136 CXX test/cpp_headers/blob.o 00:03:41.136 
CXX test/cpp_headers/conf.o 00:03:41.136 CXX test/cpp_headers/config.o 00:03:41.136 CC test/nvme/cuse/cuse.o 00:03:41.136 LINK env_dpdk_post_init 00:03:41.136 CXX test/cpp_headers/cpuset.o 00:03:41.136 LINK sgl 00:03:41.136 CXX test/cpp_headers/crc16.o 00:03:41.136 LINK pmr_persistence 00:03:41.136 LINK nvme_dp 00:03:41.136 CXX test/cpp_headers/crc32.o 00:03:41.136 LINK stub 00:03:41.136 LINK scheduler 00:03:41.136 LINK overhead 00:03:41.136 LINK spdk_nvme_perf 00:03:41.136 CXX test/cpp_headers/crc64.o 00:03:41.400 CXX test/cpp_headers/dif.o 00:03:41.400 CXX test/cpp_headers/dma.o 00:03:41.400 CXX test/cpp_headers/endian.o 00:03:41.400 CXX test/cpp_headers/env_dpdk.o 00:03:41.400 LINK err_injection 00:03:41.400 LINK startup 00:03:41.400 LINK spdk_nvme_identify 00:03:41.400 CXX test/cpp_headers/env.o 00:03:41.400 LINK reserve 00:03:41.400 LINK bdevperf 00:03:41.400 CXX test/cpp_headers/event.o 00:03:41.400 LINK boot_partition 00:03:41.400 LINK spdk_top 00:03:41.400 LINK connect_stress 00:03:41.400 CXX test/cpp_headers/fd_group.o 00:03:41.400 LINK doorbell_aers 00:03:41.400 CXX test/cpp_headers/fd.o 00:03:41.400 CXX test/cpp_headers/file.o 00:03:41.400 LINK fused_ordering 00:03:41.400 CXX test/cpp_headers/ftl.o 00:03:41.400 CXX test/cpp_headers/gpt_spec.o 00:03:41.400 CXX test/cpp_headers/hexlify.o 00:03:41.400 CXX test/cpp_headers/histogram_data.o 00:03:41.400 LINK simple_copy 00:03:41.400 CXX test/cpp_headers/idxd.o 00:03:41.400 LINK abort 00:03:41.400 CXX test/cpp_headers/idxd_spec.o 00:03:41.668 CXX test/cpp_headers/init.o 00:03:41.668 LINK nvme_fuzz 00:03:41.668 CXX test/cpp_headers/ioat.o 00:03:41.668 CXX test/cpp_headers/ioat_spec.o 00:03:41.668 CXX test/cpp_headers/iscsi_spec.o 00:03:41.668 CXX test/cpp_headers/json.o 00:03:41.668 CXX test/cpp_headers/jsonrpc.o 00:03:41.668 CXX test/cpp_headers/keyring.o 00:03:41.668 CXX test/cpp_headers/keyring_module.o 00:03:41.668 CXX test/cpp_headers/likely.o 00:03:41.668 LINK pci_ut 00:03:41.668 LINK vhost_fuzz 00:03:41.668 CXX test/cpp_headers/log.o 00:03:41.668 CXX test/cpp_headers/lvol.o 00:03:41.668 CXX test/cpp_headers/memory.o 00:03:41.668 CXX test/cpp_headers/mmio.o 00:03:41.668 CXX test/cpp_headers/nbd.o 00:03:41.668 CXX test/cpp_headers/notify.o 00:03:41.668 CXX test/cpp_headers/nvme.o 00:03:41.668 CXX test/cpp_headers/nvme_intel.o 00:03:41.668 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.668 LINK nvme_compliance 00:03:41.668 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.668 CXX test/cpp_headers/nvme_spec.o 00:03:41.668 CXX test/cpp_headers/nvme_zns.o 00:03:41.668 LINK fdp 00:03:41.668 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.668 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.668 CXX test/cpp_headers/nvmf.o 00:03:41.668 CXX test/cpp_headers/nvmf_spec.o 00:03:41.930 CXX test/cpp_headers/nvmf_transport.o 00:03:41.930 CXX test/cpp_headers/opal.o 00:03:41.930 CXX test/cpp_headers/opal_spec.o 00:03:41.930 CXX test/cpp_headers/pci_ids.o 00:03:41.930 CXX test/cpp_headers/pipe.o 00:03:41.930 CXX test/cpp_headers/queue.o 00:03:41.930 LINK memory_ut 00:03:41.930 CXX test/cpp_headers/reduce.o 00:03:41.930 CXX test/cpp_headers/rpc.o 00:03:41.930 CXX test/cpp_headers/scheduler.o 00:03:41.930 CXX test/cpp_headers/scsi.o 00:03:41.930 CXX test/cpp_headers/scsi_spec.o 00:03:41.930 CXX test/cpp_headers/sock.o 00:03:41.930 CXX test/cpp_headers/stdinc.o 00:03:41.930 CXX test/cpp_headers/string.o 00:03:41.930 CXX test/cpp_headers/thread.o 00:03:41.930 CXX test/cpp_headers/trace.o 00:03:41.930 CXX test/cpp_headers/trace_parser.o 00:03:41.930 CXX 
test/cpp_headers/tree.o 00:03:41.930 CXX test/cpp_headers/ublk.o 00:03:41.930 CXX test/cpp_headers/util.o 00:03:41.930 CXX test/cpp_headers/uuid.o 00:03:41.930 CXX test/cpp_headers/version.o 00:03:41.930 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.930 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.930 CXX test/cpp_headers/vhost.o 00:03:41.930 CXX test/cpp_headers/vmd.o 00:03:41.930 CXX test/cpp_headers/xor.o 00:03:41.930 CXX test/cpp_headers/zipf.o 00:03:42.864 LINK cuse 00:03:43.121 LINK iscsi_fuzz 00:03:46.400 LINK esnap 00:03:46.400 00:03:46.400 real 0m40.771s 00:03:46.400 user 7m36.848s 00:03:46.400 sys 1m51.916s 00:03:46.400 16:24:53 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:46.400 16:24:53 make -- common/autotest_common.sh@10 -- $ set +x 00:03:46.400 ************************************ 00:03:46.400 END TEST make 00:03:46.400 ************************************ 00:03:46.400 16:24:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.400 16:24:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.400 16:24:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.400 16:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.400 16:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.400 16:24:53 -- pm/common@44 -- $ pid=1527418 00:03:46.400 16:24:53 -- pm/common@50 -- $ kill -TERM 1527418 00:03:46.400 16:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.400 16:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.400 16:24:53 -- pm/common@44 -- $ pid=1527420 00:03:46.400 16:24:53 -- pm/common@50 -- $ kill -TERM 1527420 00:03:46.400 16:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.400 16:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:46.400 16:24:53 -- pm/common@44 -- $ pid=1527422 00:03:46.400 16:24:53 -- pm/common@50 -- $ kill -TERM 1527422 00:03:46.400 16:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.400 16:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:46.400 16:24:53 -- pm/common@44 -- $ pid=1527457 00:03:46.400 16:24:53 -- pm/common@50 -- $ sudo -E kill -TERM 1527457 00:03:46.400 16:24:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.400 16:24:53 -- nvmf/common.sh@7 -- # uname -s 00:03:46.400 16:24:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.400 16:24:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.400 16:24:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.400 16:24:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.400 16:24:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.400 16:24:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.400 16:24:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.400 16:24:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.400 16:24:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.400 16:24:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.400 16:24:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:46.400 16:24:53 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:46.400 16:24:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.400 16:24:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.400 16:24:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:46.400 16:24:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.400 16:24:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:46.400 16:24:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.400 16:24:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.400 16:24:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.400 16:24:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.400 16:24:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.400 16:24:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.400 16:24:53 -- paths/export.sh@5 -- # export PATH 00:03:46.400 16:24:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.400 16:24:53 -- nvmf/common.sh@47 -- # : 0 00:03:46.400 16:24:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:46.400 16:24:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:46.400 16:24:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.400 16:24:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.400 16:24:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.400 16:24:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:46.400 16:24:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:46.400 16:24:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:46.400 16:24:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.400 16:24:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.400 16:24:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.400 16:24:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.400 16:24:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:46.400 16:24:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.400 16:24:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:46.400 16:24:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.400 16:24:53 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.400 16:24:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.400 16:24:53 -- spdk/autotest.sh@48 -- # udevadm_pid=1602871 00:03:46.401 16:24:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.401 16:24:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.401 16:24:53 -- pm/common@17 -- # local monitor 00:03:46.401 16:24:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.401 16:24:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.401 16:24:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.401 16:24:53 -- pm/common@21 -- # date +%s 00:03:46.401 16:24:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.401 16:24:53 -- pm/common@21 -- # date +%s 00:03:46.401 16:24:53 -- pm/common@25 -- # sleep 1 00:03:46.401 16:24:53 -- pm/common@21 -- # date +%s 00:03:46.401 16:24:53 -- pm/common@21 -- # date +%s 00:03:46.401 16:24:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715783093 00:03:46.401 16:24:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715783093 00:03:46.401 16:24:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715783093 00:03:46.401 16:24:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715783093 00:03:46.401 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715783093_collect-vmstat.pm.log 00:03:46.401 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715783093_collect-cpu-load.pm.log 00:03:46.401 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715783093_collect-cpu-temp.pm.log 00:03:46.401 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715783093_collect-bmc-pm.bmc.pm.log 00:03:47.335 16:24:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.335 16:24:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.335 16:24:54 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:47.335 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:03:47.335 16:24:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.335 16:24:54 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:47.335 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:03:47.335 16:24:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:47.335 16:24:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.335 16:24:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.335 16:24:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:47.335 16:24:54 -- spdk/autotest.sh@63 -- 
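For readers tracing the power-monitor plumbing: each scripts/perf/pm/collect-* helper above is launched with an output directory (-d), logging enabled (-l), and a run-name prefix (-p), and the stop_monitor_resources calls stamped 16:24:53 near the start of this log later read one pid file per monitor and send it TERM. A condensed sketch of that start/stop contract follows; the loop and variable names are invented for illustration, only the flags and the pid-file/kill pattern come from the trace.

pm_dir=$rootdir/scripts/perf/pm
power_out=$rootdir/../output/power
run=monitor.autotest.sh.$(date +%s)
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    "$pm_dir/$mon" -d "$power_out" -l -p "$run" &      # each monitor leaves a .pid file behind
done
sudo -E "$pm_dir/collect-bmc-pm" -d "$power_out" -l -p "$run" &
# teardown, as seen in the stop_monitor_resources trace above:
for pidfile in "$power_out"/collect-*.pid; do
    [[ -e $pidfile ]] && kill -TERM "$(< "$pidfile")"
done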
# cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.335 16:24:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.335 16:24:54 -- common/autotest_common.sh@1451 -- # uname 00:03:47.335 16:24:54 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:47.335 16:24:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.335 16:24:54 -- common/autotest_common.sh@1471 -- # uname 00:03:47.335 16:24:54 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:47.335 16:24:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:47.335 16:24:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:47.335 16:24:54 -- spdk/autotest.sh@72 -- # hash lcov 00:03:47.335 16:24:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:47.335 16:24:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:47.335 --rc lcov_branch_coverage=1 00:03:47.335 --rc lcov_function_coverage=1 00:03:47.335 --rc genhtml_branch_coverage=1 00:03:47.335 --rc genhtml_function_coverage=1 00:03:47.335 --rc genhtml_legend=1 00:03:47.335 --rc geninfo_all_blocks=1 00:03:47.335 ' 00:03:47.335 16:24:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:47.335 --rc lcov_branch_coverage=1 00:03:47.335 --rc lcov_function_coverage=1 00:03:47.335 --rc genhtml_branch_coverage=1 00:03:47.335 --rc genhtml_function_coverage=1 00:03:47.335 --rc genhtml_legend=1 00:03:47.335 --rc geninfo_all_blocks=1 00:03:47.335 ' 00:03:47.335 16:24:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:47.335 --rc lcov_branch_coverage=1 00:03:47.335 --rc lcov_function_coverage=1 00:03:47.335 --rc genhtml_branch_coverage=1 00:03:47.335 --rc genhtml_function_coverage=1 00:03:47.335 --rc genhtml_legend=1 00:03:47.335 --rc geninfo_all_blocks=1 00:03:47.335 --no-external' 00:03:47.335 16:24:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:47.335 --rc lcov_branch_coverage=1 00:03:47.335 --rc lcov_function_coverage=1 00:03:47.335 --rc genhtml_branch_coverage=1 00:03:47.335 --rc genhtml_function_coverage=1 00:03:47.335 --rc genhtml_legend=1 00:03:47.335 --rc geninfo_all_blocks=1 00:03:47.336 --no-external' 00:03:47.336 16:24:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:47.594 lcov: LCOV version 1.14 00:03:47.594 16:24:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:59.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.825 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:01.723 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:01.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:01.723 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
[The baseline geninfo run then printed the identical warning pair ("<obj>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <obj>.gcno") for every header-compile object under test/cpp_headers/, accel.gcno through version.gcno; that run of repeated warnings is condensed here. The final pairs, uuid.gcno through zipf.gcno, follow below.]
00:04:19.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:19.798 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:19.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:19.798 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:19.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:19.798 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:19.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:19.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:19.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:19.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:19.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:19.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:19.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:19.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:20.733 16:25:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:20.733 16:25:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:20.733 16:25:27 -- common/autotest_common.sh@10 -- # set +x 00:04:20.733 16:25:27 -- spdk/autotest.sh@91 -- # rm -f 00:04:20.733 16:25:27 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.107 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:22.107 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:22.107 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:22.107 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:22.107 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:22.107 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:22.107 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:22.107 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:22.107 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:22.107 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:22.107 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:22.107 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:22.107 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:22.107 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:22.107 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:22.107 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:22.107 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:22.107 16:25:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:22.107 16:25:29 -- common/autotest_common.sh@1665 -- # zoned_devs=() 
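The get_zoned_devs trace that begins on the line above scans every NVMe namespace for zoned block support so later stages can steer around such devices. A condensed sketch of the same check, using the names visible in the trace:

    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        # A namespace is zoned iff queue/zoned reads something other than "none";
        # here every namespace reports "none", so the array stays empty
        # (hence the (( 0 > 0 )) just below).
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=$nvme
    done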
00:04:22.107 16:25:29 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:22.107 16:25:29 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:22.107 16:25:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:22.107 16:25:29 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:22.107 16:25:29 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:22.107 16:25:29 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.107 16:25:29 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:22.107 16:25:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:22.107 16:25:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.107 16:25:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:22.107 16:25:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:22.107 16:25:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:22.107 16:25:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:22.365 No valid GPT data, bailing 00:04:22.365 16:25:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.365 16:25:29 -- scripts/common.sh@391 -- # pt= 00:04:22.365 16:25:29 -- scripts/common.sh@392 -- # return 1 00:04:22.365 16:25:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:22.365 1+0 records in 00:04:22.365 1+0 records out 00:04:22.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00220638 s, 475 MB/s 00:04:22.365 16:25:29 -- spdk/autotest.sh@118 -- # sync 00:04:22.365 16:25:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:22.365 16:25:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:22.365 16:25:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.264 16:25:31 -- spdk/autotest.sh@124 -- # uname -s 00:04:24.264 16:25:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:24.264 16:25:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:24.264 16:25:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.264 16:25:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.264 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:04:24.264 ************************************ 00:04:24.264 START TEST setup.sh 00:04:24.264 ************************************ 00:04:24.264 16:25:31 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:24.264 * Looking for test storage... 
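The pre_cleanup pass above decides per namespace whether anything is still using it before the setup tests start: the GPT probe bails ("No valid GPT data, bailing"), blkid returns no PTTYPE, so block_in_use returns 1 and the first MiB gets zeroed. A rough sketch of that decision, assuming spdk-gpt.py exits non-zero when it finds no valid GPT (paths shortened):

    dev=/dev/nvme0n1
    # Neither a valid GPT nor a partition-table signature from blkid:
    # treat the namespace as free and clobber any stale metadata.
    if ! scripts/spdk-gpt.py "$dev" &&
       [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi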
00:04:24.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.264 16:25:31 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:24.264 16:25:31 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:24.264 16:25:31 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:24.264 16:25:31 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.264 16:25:31 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.264 16:25:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.264 ************************************ 00:04:24.264 START TEST acl 00:04:24.264 ************************************ 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:24.264 * Looking for test storage... 00:04:24.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.264 16:25:31 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.264 16:25:31 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:24.264 16:25:31 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:24.264 16:25:31 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:24.264 16:25:31 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:24.264 16:25:31 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:24.264 16:25:31 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:24.264 16:25:31 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.264 16:25:31 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.164 16:25:32 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:26.164 16:25:32 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:26.164 16:25:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:26.164 16:25:32 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:26.164 16:25:32 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.164 16:25:32 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.097 Hugepages 00:04:27.097 node hugesize free / total 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 00:04:27.097 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:27.097 16:25:34 setup.sh.acl 
-- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:27.097 16:25:34 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:27.097 16:25:34 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.097 16:25:34 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.097 16:25:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.097 ************************************ 00:04:27.097 START TEST denied 00:04:27.097 ************************************ 00:04:27.097 16:25:34 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:27.097 16:25:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:04:27.097 16:25:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 
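The denied test starting here exercises PCI_BLOCKED: with the NVMe controller's BDF blocked, setup.sh config must announce the skip, and the verify step then checks that the device is still bound to the in-kernel nvme driver rather than vfio-pci. Condensed from the trace (BDF as on this node, paths shortened):

    export PCI_BLOCKED=' 0000:0b:00.0'
    scripts/setup.sh config | grep 'Skipping denied controller at 0000:0b:00.0'
    # verify: the driver symlink must still resolve to the kernel nvme driver
    driver=$(readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver)
    [[ ${driver##*/} == nvme ]]
    scripts/setup.sh reset            # rebind everything for the next test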
00:04:27.097 16:25:34 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:04:27.097 16:25:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.097 16:25:34 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.000 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.000 16:25:35 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.580 00:04:31.580 real 0m4.023s 00:04:31.580 user 0m1.276s 00:04:31.580 sys 0m1.932s 00:04:31.580 16:25:38 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.580 16:25:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:31.580 ************************************ 00:04:31.580 END TEST denied 00:04:31.580 ************************************ 00:04:31.580 16:25:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:31.580 16:25:38 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.580 16:25:38 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.580 16:25:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.580 ************************************ 00:04:31.580 START TEST allowed 00:04:31.580 ************************************ 00:04:31.580 16:25:38 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:31.580 16:25:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:04:31.580 16:25:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:31.580 16:25:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:04:31.580 16:25:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.580 16:25:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.110 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.110 16:25:40 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:34.110 16:25:40 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:34.110 16:25:40 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:34.110 16:25:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.110 16:25:40 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.484 00:04:35.484 real 0m4.211s 00:04:35.484 user 0m1.229s 00:04:35.484 sys 0m1.979s 00:04:35.484 16:25:42 
setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.484 16:25:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:35.484 ************************************ 00:04:35.484 END TEST allowed 00:04:35.484 ************************************ 00:04:35.484 00:04:35.484 real 0m11.228s 00:04:35.484 user 0m3.694s 00:04:35.484 sys 0m5.800s 00:04:35.484 16:25:42 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.484 16:25:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:35.484 ************************************ 00:04:35.484 END TEST acl 00:04:35.484 ************************************ 00:04:35.484 16:25:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.484 16:25:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.484 16:25:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.484 16:25:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.484 ************************************ 00:04:35.484 START TEST hugepages 00:04:35.484 ************************************ 00:04:35.484 16:25:42 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.484 * Looking for test storage... 00:04:35.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 34580000 kB' 'MemAvailable: 39306196 kB' 'Buffers: 2696 kB' 'Cached: 19367864 kB' 'SwapCached: 0 kB' 'Active: 15344000 kB' 'Inactive: 4481728 kB' 'Active(anon): 14729720 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 458540 kB' 'Mapped: 179840 kB' 'Shmem: 14274552 kB' 'KReclaimable: 248052 kB' 'Slab: 627924 kB' 'SReclaimable: 248052 kB' 'SUnreclaim: 379872 kB' 'KernelStack: 13040 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 15859820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.744 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 
16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.746 16:25:42 setup.sh.hugepages -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:35.746 16:25:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:35.746 16:25:42 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.746 16:25:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.746 16:25:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.746 ************************************ 00:04:35.746 START TEST default_setup 00:04:35.746 ************************************ 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:35.746 16:25:42 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.746 16:25:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.119 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.119 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.119 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:38.061 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
00:04:38.061 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36676428 kB' 'MemAvailable: 41402548 kB' 'Buffers: 2696 kB' 'Cached: 19367964 kB' 'SwapCached: 0 kB' 'Active: 15361816 kB' 'Inactive: 4481728 kB' 'Active(anon): 14747536 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476164 kB' 'Mapped: 179944 kB' 'Shmem: 14274652 kB' 'KReclaimable: 247900 kB' 'Slab: 627660 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379760 kB' 'KernelStack: 13088 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198492 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
[trace condensed: setup/common.sh@31-32 walked the read loop over every key of the snapshot above, taking the continue branch for each non-matching key, until it reached the requested AnonHugePages entry:]
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
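What the trace above amounts to: get_meminfo() in setup/common.sh loads the chosen meminfo file into an array, splits each line on ': ', and walks the keys one by one (hence the long run of continue steps) until it finds the requested key, then echoes the numeric value with the unit dropped. A minimal bash sketch of that pattern, reconstructed from the trace; anything not visible in the trace (the loop shape, the final fallthrough return) is an assumption, not the verbatim SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern below

  # Sketch of the lookup pattern traced at setup/common.sh@17-33.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f mem line
      mem_f=/proc/meminfo
      # Prefer the per-NUMA-node file when a node is given and present.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ...
          echo "$val"                        # value only; the "kB" unit lands in _
          return 0
      done
      return 1
  }

  get_meminfo AnonHugePages   # prints 0 on the machine in this log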
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:38.063 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36677608 kB' 'MemAvailable: 41403728 kB' 'Buffers: 2696 kB' 'Cached: 19367964 kB' 'SwapCached: 0 kB' 'Active: 15362388 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748108 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476788 kB' 'Mapped: 179944 kB' 'Shmem: 14274652 kB' 'KReclaimable: 247900 kB' 'Slab: 627644 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379744 kB' 'KernelStack: 13040 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
[trace condensed: the same setup/common.sh@32 key-by-key comparison then ran against HugePages_Surp until the match:]
00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
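A side note on the preamble that repeats before each snapshot: node= is empty in these calls, so the @23 test probes /sys/devices/system/node/node/meminfo (no node number), which cannot exist, and mem_f stays /proc/meminfo. When a node is supplied, the per-node file is used instead, and its lines carry a "Node <n> " prefix that the @29 expansion strips so the same parser works for both sources. A small illustration of that strip; the sample values are invented:

  shopt -s extglob
  mem=("Node 0 MemTotal: 30270864 kB" "Node 0 MemFree: 18338412 kB")
  mem=("${mem[@]#Node +([0-9]) }")   # the expansion traced at setup/common.sh@29
  printf '%s\n' "${mem[@]}"
  # MemTotal: 30270864 kB
  # MemFree: 18338412 kB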
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36678556 kB' 'MemAvailable: 41404676 kB' 'Buffers: 2696 kB' 'Cached: 19367984 kB' 'SwapCached: 0 kB' 'Active: 15361728 kB' 'Inactive: 4481728 kB' 'Active(anon): 14747448 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476052 kB' 'Mapped: 179860 kB' 'Shmem: 14274672 kB' 'KReclaimable: 247900 kB' 'Slab: 627692 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379792 kB' 'KernelStack: 13008 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.065 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.066 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.067 
16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.067 nr_hugepages=1024 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.067 resv_hugepages=0 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.067 surplus_hugepages=0 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.067 anon_hugepages=0 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
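The three lookups above feed the consistency checks at setup/hugepages.sh@107 and @109: the page count requested by default_setup must equal what the kernel reports once surplus and reserved pages are accounted for. With the values visible in this log the identity is 1024 == 1024 + 0 + 0, so both tests pass and the script goes on to re-read HugePages_Total. The same arithmetic as standalone bash; the if/echo wrapper is illustrative, not the script's literal code:

  # Values taken from the snapshots in this log.
  expected=1024       # pages requested by default_setup
  nr_hugepages=1024   # HugePages_Total: 1024
  surp=0              # HugePages_Surp:  0
  resv=0              # HugePages_Rsvd:  0
  anon=0              # AnonHugePages: 0 kB (reported, but not part of the @107 identity)

  if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
      echo "hugepage accounting consistent: $nr_hugepages pages, $surp surplus, $resv reserved"
  fi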
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:38.067 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36678556 kB' 'MemAvailable: 41404676 kB' 'Buffers: 2696 kB' 'Cached: 19368004 kB' 'SwapCached: 0 kB' 'Active: 15361736 kB' 'Inactive: 4481728 kB' 'Active(anon): 14747456 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476020 kB' 'Mapped: 179860 kB' 'Shmem: 14274692 kB' 'KReclaimable: 247900 kB' 'Slab: 627692 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379792 kB' 'KernelStack: 12992 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:38.067 [... xtrace trimmed: the same key scan repeats, comparing MemTotal through Unaccepted against HugePages_Total ...]
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
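The get_meminfo call that follows runs with node=0, so the source file switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips before the usual key scan. A sketch of that per-node variant (hypothetical helper name; sed stands in for the extglob prefix-strip for brevity):

    #!/usr/bin/env bash
    # Sketch: read one field from a single NUMA node's meminfo, whose
    # lines look like "Node 0 HugePages_Surp: 0".
    node_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        [[ -e $mem_f ]] || return 1
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the "Node N " prefix
    }
    node_meminfo_sketch HugePages_Surp 0   # on the box traced above: 0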
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:38.069 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20650696 kB' 'MemUsed: 12226244 kB' 'SwapCached: 0 kB' 'Active: 8126560 kB' 'Inactive: 1090600 kB' 'Active(anon): 7795004 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899012 kB' 'Mapped: 65128 kB' 'AnonPages: 321220 kB' 'Shmem: 7476856 kB' 'KernelStack: 8024 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138252 kB' 'Slab: 316752 kB' 'SReclaimable: 138252 kB' 'SUnreclaim: 178500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:38.069 [... xtrace trimmed: key scan over the node0 fields, comparing MemTotal through HugePages_Free against HugePages_Surp ...]
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:38.070 node0=1024 expecting 1024
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:38.070
00:04:38.070 real 0m2.482s
00:04:38.070 user 0m0.657s
00:04:38.070 sys 0m0.881s
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:38.070 16:25:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:38.070 ************************************
00:04:38.070 END TEST default_setup
00:04:38.070 ************************************
00:04:38.329 16:25:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:38.329 16:25:45 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:38.329 16:25:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:38.329 16:25:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
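default_setup passes because all 1024 default-size pages landed on node 0, matching the expectation echoed above. Assuming the 2048 kB hugepage size reported in the meminfo dumps, the same per-node distribution can be spot-checked straight from sysfs; a minimal sketch:

    #!/usr/bin/env bash
    # Sketch: print each node's 2 MiB hugepage count from sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "${node##*/}=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    # Given the trace above, this would print node0=1024 and node1=0.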
00:04:38.329 ************************************
00:04:38.329 START TEST per_node_1G_alloc
00:04:38.329 ************************************
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:38.329 16:25:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
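The parameters decode as a 1048576 kB (1 GiB) target per listed node: at the 2048 kB default hugepage size that is 1048576 / 2048 = 512 pages per node, which is why the trace sets nr_hugepages=512 and runs setup.sh with NRHUGE=512 HUGENODE=0,1 (1024 pages across both nodes). The arithmetic, for reference:

    #!/usr/bin/env bash
    size_kb=1048576 hugepage_kb=2048 nodes=2          # values from the trace above
    echo "$((size_kb / hugepage_kb)) pages/node"      # -> 512
    echo "$((nodes * size_kb / hugepage_kb)) total"   # -> 1024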
00:04:39.710 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:39.710 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:39.710 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:39.710 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:39.710 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:39.710 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:39.710 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:39.710 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:39.710 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:39.710 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:39.710 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:39.710 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:39.710 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:39.710 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:39.710 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:39.710 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:39.710 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:39.710 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
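The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above matches the kernel's transparent-hugepage mode string, in which the bracketed word marks the active setting; anonymous hugepages are only worth counting when THP is not pinned to never. A sketch of the same probe (illustrative, not the script's exact code path):

    #!/usr/bin/env bash
    # Sketch: skip the AnonHugePages lookup when THP is disabled outright.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        grep AnonHugePages /proc/meminfo
    else
        echo "anon_hugepages=0 (THP disabled)"
    fi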
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.711 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36697232 kB' 'MemAvailable: 41423352 kB' 'Buffers: 2696 kB' 'Cached: 19368080 kB' 'SwapCached: 0 kB' 'Active: 15362704 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748424 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476772 kB' 'Mapped: 179892 kB' 'Shmem: 14274768 kB' 'KReclaimable: 247900 kB' 'Slab: 627540 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379640 kB' 'KernelStack: 13056 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:39.711 [... xtrace trimmed: key scan against AnonHugePages, comparing MemTotal through Percpu and continuing past each non-match ...]
00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- #
continue 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36703960 kB' 'MemAvailable: 41430080 kB' 'Buffers: 2696 kB' 'Cached: 19368080 kB' 'SwapCached: 0 kB' 'Active: 15363076 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748796 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477144 kB' 'Mapped: 179872 kB' 'Shmem: 14274768 kB' 'KReclaimable: 247900 kB' 'Slab: 627516 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379616 kB' 'KernelStack: 13088 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198604 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.712 16:25:46 
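The scan above is the whole trick behind get_meminfo: pick the right meminfo file, strip any per-node prefix, and walk "Field: value" pairs until the requested field matches. A minimal sketch reconstructed from this xtrace output (the loop framing and error handling are assumptions, not a verbatim copy of SPDK's setup/common.sh):

    # Look up one field from /proc/meminfo, or from a NUMA node's meminfo
    # when a node number is given. Per-node files prefix lines with "Node <n> ".
    shopt -s extglob    # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node prefix, if any
        local line
        for line in "${mem[@]}"; do
            # IFS=': ' splits on both the colon and the spaces around the value
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the snapshot above, get_meminfo HugePages_Surp prints 0 and get_meminfo MemTotal prints 60541728.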
00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.712 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... xtrace scan condensed: the read/compare/continue pair repeats for each /proc/meminfo field, MemFree through HugePages_Rsvd, none matching HugePages_Surp ...] 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36704008 kB' 'MemAvailable: 41430128 kB' 'Buffers: 2696 kB' 'Cached: 19368100 kB' 'SwapCached: 0 kB' 'Active: 15362396 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748116 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476480 kB' 'Mapped: 179872 kB' 'Shmem: 14274788 kB' 'KReclaimable: 247900 kB' 'Slab: 627560 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379660 kB' 'KernelStack: 13072 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198604 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
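Note the probe [[ -e /sys/devices/system/node/node/meminfo ]]: node= is empty here, so the path collapses to a nonexistent node/node/meminfo and the function falls back to the global /proc/meminfo. When a node number is supplied, the same fields come from sysfs, where every line carries a "Node <n> " prefix. A hypothetical helper illustrating that per-node layout (node_meminfo is an illustrative name, not part of the traced scripts):

    # Print one field from a NUMA node's meminfo,
    # e.g. node_meminfo 0 HugePages_Total
    node_meminfo() {
        local node=$1 field=$2
        local f=/sys/devices/system/node/node$node/meminfo
        [[ -e $f ]] || return 1
        # Lines read "Node <n> Field: value [kB]"; match field 3, print field 4.
        awk -v want="$field:" '$3 == want { print $4 }' "$f"
    }

For the 1G side of a per_node_1G_alloc-style test, the per-node pool itself is driven through the standard kernel knob /sys/devices/system/node/node<N>/hugepages/hugepages-1048576kB/nr_hugepages.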
00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.714 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... xtrace scan condensed: the read/compare/continue pair repeats for each /proc/meminfo field, MemFree through HugePages_Free, none matching HugePages_Rsvd ...] 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.716 nr_hugepages=1024 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.716 resv_hugepages=0 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.716 surplus_hugepages=0 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.716 anon_hugepages=0 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
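At hugepages.sh@107 and @109 the test asserts its bookkeeping: the 1024-page pool it configured must equal what the kernel reports, with zero surplus, reserved, and anonymous huge pages. A sketch of that check, assuming the get_meminfo reconstruction above (verify_hugepages is a hypothetical name):

    # Confirm the configured hugepage pool matches what the kernel reports.
    verify_hugepages() {
        local expected=$1 total surp resv
        total=$(get_meminfo HugePages_Total)   # 1024 in this run
        surp=$(get_meminfo HugePages_Surp)     # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
        (( expected == total + surp + resv )) || return 1
        (( expected == total ))
    }
    verify_hugepages 1024    # succeeds against the snapshot above

Surplus and reserved counts matter because pages in either state are not actually available to DPDK/SPDK even though they inflate or shadow HugePages_Total.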
'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.716 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.717 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.718 16:25:46 
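The scan condensed above follows one pattern throughout this suite: setup/common.sh reads the chosen meminfo file one "key: value" line at a time with IFS=': ' and echoes the value once the requested key matches. A minimal standalone sketch of that pattern (function name and structure are illustrative only, not the harness's actual code):

    # Sketch only: mirrors the IFS=': ' read / compare / echo loop in the trace.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # Per-node stats come from sysfs when a node index is supplied.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node "$node" }   # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total  ->  1024 on this machine

The early return is why the trace shows every non-matching key as a skipped iteration followed by a single echo/return pair.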
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.718 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21715716 kB' 'MemUsed: 11161224 kB' 'SwapCached: 0 kB' 'Active: 8126488 kB' 'Inactive: 1090600 kB' 'Active(anon): 7794932 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899016 kB' 'Mapped: 65140 kB' 'AnonPages: 321144 kB' 'Shmem: 7476860 kB' 'KernelStack: 8056 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138252 kB' 'Slab: 316716 kB' 'SReclaimable: 138252 kB' 'SUnreclaim: 178464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.719 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [node0 meminfo scan: keys MemTotal through HugePages_Free read and skipped]
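The get_nodes step above discovers the NUMA topology by globbing sysfs (the harness uses the extglob pattern node+([0-9])). A hedged equivalent with a plain glob, variable names illustrative:

    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        # key = node index, value = that node's current hugepage count
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # prints no_nodes=2 on this two-socket box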
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.720 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 14988632 kB' 'MemUsed: 12676156 kB' 'SwapCached: 0 kB' 'Active: 7236324 kB' 'Inactive: 3391128 kB' 'Active(anon): 6953600 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10471852 kB' 'Mapped: 114732 kB' 'AnonPages: 155676 kB' 'Shmem: 6798000 kB' 'KernelStack: 5016 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109648 kB' 'Slab: 310844 kB' 'SReclaimable: 109648 kB' 'SUnreclaim: 201196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [node1 meminfo scan: keys MemTotal through HugePages_Free read and skipped]
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:39.721 node0=512 expecting 512
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:39.721 node1=512 expecting 512
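The node0=512/node1=512 expectations above can be spot-checked by hand with a loop like the following (a sketch for illustration, not part of the suite):

    for node in /sys/devices/system/node/node[0-9]*; do
        got=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
        echo "node${node##*node}=$got expecting 512"
    done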
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:39.721 00:04:39.721 real 0m1.540s 00:04:39.721 user 0m0.641s 00:04:39.721 sys 0m0.867s 00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.721 16:25:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.721 ************************************ 00:04:39.721 END TEST per_node_1G_alloc 00:04:39.721 ************************************ 00:04:39.721 16:25:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:39.721 16:25:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.721 16:25:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.721 16:25:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.721 ************************************ 00:04:39.721 START TEST even_2G_alloc 00:04:39.721 ************************************ 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.721 
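Editor's note: the trace above converts the requested size into pages and splits them across NUMA nodes: 2097152 kB / 2048 kB (the Hugepagesize reported in the meminfo dumps below) = 1024 pages, assigned as 512 to node0 and 512 to node1. A minimal standalone sketch of an equivalent even split (hypothetical helper, not the literal setup/hugepages.sh code, which walks the nodes top-down through nodes_test[_no_nodes - 1]):

```bash
#!/usr/bin/env bash
# Hypothetical re-creation of the even per-node split traced above: each
# node gets the integer share, and the first (nr_hugepages % no_nodes)
# nodes absorb any remainder.
split_hugepages_per_node() {
	local nr_hugepages=$1 no_nodes=$2
	local -a nodes_test=()
	local node
	for ((node = 0; node < no_nodes; node++)); do
		nodes_test[node]=$((nr_hugepages / no_nodes + (node < nr_hugepages % no_nodes ? 1 : 0)))
	done
	for node in "${!nodes_test[@]}"; do
		echo "node$node=${nodes_test[node]}"
	done
}

split_hugepages_per_node 1024 2   # prints node0=512 and node1=512, matching the log
```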
00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.721 16:25:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:41.093 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:41.094 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:41.094 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:41.094 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:41.094 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:41.094 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:41.094 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:41.094 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:41.094 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:41.094 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:41.094 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:41.094 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:41.094 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:41.094 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:41.094 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:41.094 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:41.094 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
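Editor's note: scripts/setup.sh runs here with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, and only its device-probe output lands in the log. Whatever the script does internally, an even allocation reduces to per-node writes through the standard Linux sysfs interface; a hedged sketch of that mechanism (illustrative only, not the body of setup.sh):

```bash
#!/usr/bin/env bash
# Illustrative even hugepage allocation via the kernel's per-node sysfs
# knobs (assumption: 2048 kB default page size, as in the dumps below).
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$((NRHUGE / ${#nodes[@]}))

for node in "${nodes[@]}"; do
	knob=$node/hugepages/hugepages-2048kB/nr_hugepages
	# request the share; the kernel may grant fewer pages under memory
	# fragmentation, so read the count back instead of trusting the write
	echo "$per_node" | sudo tee "$knob" > /dev/null
	echo "$node: $(cat "$knob") pages allocated"
done
```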
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.359 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36707068 kB' 'MemAvailable: 41433188 kB' 'Buffers: 2696 kB' 'Cached: 19368220 kB' 'SwapCached: 0 kB' 'Active: 15362644 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748364 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476796 kB' 'Mapped: 179888 kB' 'Shmem: 14274908 kB' 'KReclaimable: 247900 kB' 'Slab: 627672 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379772 kB' 'KernelStack: 13056 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:41.359 [... per-field xtrace scan elided: every key from MemTotal through HardwareCorrupted compared against AnonHugePages and skipped via 'continue' ...]
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
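Editor's note: the AnonHugePages lookup just traced (and the HugePages_Surp and HugePages_Rsvd lookups that follow) all go through the same get_meminfo helper: pick /proc/meminfo, or a node's own meminfo when a node argument is supplied (the @23 probe of /sys/devices/system/node/node/meminfo misses here because node is empty), strip the 'Node N' prefix, then scan line by line under IFS=': ' until the requested key matches. Reconstructed from the trace as a standalone sketch (names follow setup/common.sh, but the body is inferred, not verbatim):

```bash
#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

# Sketch of get_meminfo as reconstructed from the xtrace above.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f=/proc/meminfo mem line
	# a node argument switches to that node's view of the counters
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"   # bare value: a kB figure or a page count
			return 0
		fi
	done
	return 1
}

get_meminfo AnonHugePages      # prints 0 on this host
get_meminfo HugePages_Surp 0   # surplus 2 MB pages on node0
```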
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.361 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36709660 kB' 'MemAvailable: 41435780 kB' 'Buffers: 2696 kB' 'Cached: 19368224 kB' 'SwapCached: 0 kB' 'Active: 15363248 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748968 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477456 kB' 'Mapped: 179964 kB' 'Shmem: 14274912 kB' 'KReclaimable: 247900 kB' 'Slab: 627676 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379776 kB' 'KernelStack: 13088 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15880468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198492 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:41.361 [... per-field xtrace scan elided: every key from MemTotal through HugePages_Rsvd compared against HugePages_Surp and skipped via 'continue' ...]
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
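Editor's note: with anon=0 and surp=0 stored, verify_nr_hugepages fetches HugePages_Rsvd next; the meminfo dumps above consistently report HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0. A hedged sketch of the pool accounting these values feed (reusing the get_meminfo sketch above; the real check in setup/hugepages.sh covers more cases, including the per-node pass seen at the top of this excerpt):

```bash
# Hypothetical condensation of the global pool check.
verify_pool_sketch() {
	local expected=${NRHUGE:-1024}
	local total free resv surp
	total=$(get_meminfo HugePages_Total)
	free=$(get_meminfo HugePages_Free)
	resv=$(get_meminfo HugePages_Rsvd)
	surp=$(get_meminfo HugePages_Surp)

	# the configured pool size excludes surplus (overcommitted) pages ...
	(( total - surp == expected )) || return 1
	# ... and with no consumer attached yet, every non-reserved page is free
	(( free - resv == expected )) || return 1
}

verify_pool_sketch && echo "hugepage pool verified"   # succeeds on the numbers above
```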
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.363 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36710084 kB' 'MemAvailable: 41436204 kB' 'Buffers: 2696 kB' 'Cached: 19368228 kB' 'SwapCached: 0 kB' 'Active: 15363144 kB' 'Inactive: 4481728 kB' 'Active(anon): 14748864 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477304 kB' 'Mapped: 179884 kB' 'Shmem: 14274916 kB' 'KReclaimable: 247900 kB' 'Slab: 627632 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379732 kB' 'KernelStack: 13184 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15881656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:41.363 [... per-field xtrace scan in progress at the end of this excerpt: keys from MemTotal through VmallocUsed compared against HugePages_Rsvd and skipped via 'continue' ...] 00:04:41.364 16:25:48
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.364 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.365 nr_hugepages=1024 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.365 resv_hugepages=0 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.365 surplus_hugepages=0 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.365 anon_hugepages=0 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.365 
16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36712664 kB' 'MemAvailable: 41438784 kB' 'Buffers: 2696 kB' 'Cached: 19368228 kB' 'SwapCached: 0 kB' 'Active: 15363424 kB' 'Inactive: 4481728 kB' 'Active(anon): 14749144 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477420 kB' 'Mapped: 179884 kB' 'Shmem: 14274916 kB' 'KReclaimable: 247900 kB' 'Slab: 627632 kB' 'SReclaimable: 247900 kB' 'SUnreclaim: 379732 kB' 'KernelStack: 13136 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15882872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198700 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.365 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 
16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
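The frames above record setup/common.sh scanning the meminfo dump one field at a time: get_meminfo() splits every line on IFS=': ' into a key and a value, keeps taking the `continue` branch while the key differs from the requested field, and on a match echoes the value and returns (HugePages_Rsvd -> 0 earlier, HugePages_Total -> 1024 below). A minimal standalone sketch of that lookup, reconstructed from the common.sh@17-33 frames in this trace rather than copied from the source tree:

    # sketch: field lookup over a meminfo-style file, as the xtrace suggests
    get_meminfo() {
        local get=$1 node=${2:-}           # e.g. get_meminfo HugePages_Total
        local var val _ mem_f=/proc/meminfo
        # per-node calls read the node-local copy instead (see node=0/node=1 below)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < "$mem_f"
        return 1
    }

The real helper buffers the file with mapfile and replays it with printf '%s\n', as the common.sh@16 and @28 frames show, but the matching logic is the same field-by-field scan.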
00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.366 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21725512 kB' 'MemUsed: 11151428 kB' 'SwapCached: 0 kB' 'Active: 8127548 kB' 'Inactive: 1090600 kB' 'Active(anon): 7795992 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899028 kB' 'Mapped: 64216 kB' 'AnonPages: 322188 kB' 'Shmem: 7476872 kB' 'KernelStack: 8472 kB' 'PageTables: 5932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138252 kB' 'Slab: 316616 kB' 'SReclaimable: 138252 kB' 'SUnreclaim: 178364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
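For the per-node pass, the node=0 frames above switch mem_f to /sys/devices/system/node/node0/meminfo, and every line of that file carries a "Node 0 " prefix that /proc/meminfo lines do not have, so common.sh@29 strips it with an extglob expansion before the same key/value scan runs. A small sketch of just that step (extglob must be enabled for the +([0-9]) pattern; the demo array holds the three hugepage fields this run reported for node0, with the other fields omitted and the column spacing approximated):

    # sketch of the node-local read path shown in the node=0 frames
    shopt -s extglob
    mem=('Node 0 HugePages_Total: 512' 'Node 0 HugePages_Free: 512' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 0 " prefix, as common.sh@29 does
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && echo "$val" && break   # -> 0
    done < <(printf '%s\n' "${mem[@]}")    # replay the buffered lines, as common.sh@16 does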
00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.367 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
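What even_2G_alloc is verifying here is plain bookkeeping: hugepages.sh@110 already confirmed the global count (1024 == nr_hugepages + surp + resv), get_nodes found two NUMA nodes and expects the allocation split evenly at 512 pages apiece, and the @115-117 loop adds each node's reserved and surplus pages to that expectation (both 0 for node0 above; node1 is read next). The arithmetic, restated as a runnable sketch using the values this run produced:

    # sketch of the hugepages.sh@107-117 accounting, with this run's numbers
    nr_hugepages=1024 surp=0 resv=0          # from the echoes earlier in the trace
    nodes_test=([0]=512 [1]=512)             # expected even split across 2 nodes
    (( nr_hugepages + surp + resv == 1024 )) || echo 'global hugepage count mismatch'
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))       # hugepages.sh@116
        (( nodes_test[node] += 0 ))          # hugepages.sh@117: + per-node HugePages_Surp
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

With the 2048 kB hugepage size reported above, 512 pages per node is 1 GiB per node, i.e. the even 2 GiB allocation the test name promises.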
00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15001536 kB' 'MemUsed: 12663252 kB' 'SwapCached: 0 kB' 'Active: 7231916 kB' 'Inactive: 3391128 kB' 'Active(anon): 6949192 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10471952 kB' 'Mapped: 114632 kB' 'AnonPages: 151264 kB' 'Shmem: 6798100 kB' 'KernelStack: 4872 kB' 'PageTables: 2784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109648 kB' 'Slab: 310944 kB' 'SReclaimable: 109648 kB' 'SUnreclaim: 201296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.368 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 
16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
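The scan here is the tail of verify_nr_hugepages tallying node 1; once both nodes have been read, the test reduces to the check sketched below. This is reconstructed from the hugepages.sh trace, not verbatim: check_even_split is an illustrative name, and get_meminfo is the sketch shown earlier.

#!/usr/bin/env bash
# Sketch of the even_2G_alloc acceptance check: every NUMA node must hold
# the same number of 2048 kB pages.
nodes_test=()   # expected pages per node, filled by the allocation step

check_even_split() {
    local expected=$1 node surp
    for node in "${!nodes_test[@]}"; do
        # Surplus pages would inflate a node's count, so they are folded
        # in before the comparison; in this run HugePages_Surp is 0.
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[node]} expecting $expected"
        [[ ${nodes_test[node]} -eq $expected ]] || return 1
    done
}

# e.g.: nodes_test=(512 512); check_even_split 512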
00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.369 node0=512 expecting 512 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.369 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:41.370 node1=512 expecting 512 00:04:41.370 16:25:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:41.370 00:04:41.370 real 0m1.635s 00:04:41.370 user 0m0.698s 00:04:41.370 sys 0m0.904s 00:04:41.370 16:25:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.370 16:25:48 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.370 ************************************ 00:04:41.370 END TEST even_2G_alloc 00:04:41.370 ************************************ 00:04:41.370 16:25:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:41.370 16:25:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.370 16:25:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.370 16:25:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.628 ************************************ 00:04:41.628 START TEST odd_alloc 00:04:41.628 ************************************ 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.628 16:25:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.006 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.006 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.006 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.006 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.006 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.006 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.006 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.006 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.006 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.006 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.006 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.006 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.006 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.006 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.006 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.006 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.006 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.006 16:25:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36711736 kB' 'MemAvailable: 41437860 kB' 'Buffers: 2696 kB' 'Cached: 19368336 kB' 'SwapCached: 0 kB' 'Active: 15357980 
kB' 'Inactive: 4481728 kB' 'Active(anon): 14743700 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471964 kB' 'Mapped: 178880 kB' 'Shmem: 14275024 kB' 'KReclaimable: 247908 kB' 'Slab: 627524 kB' 'SReclaimable: 247908 kB' 'SUnreclaim: 379616 kB' 'KernelStack: 13008 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15854684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.006 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.007 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36715844 kB' 'MemAvailable: 41441968 kB' 'Buffers: 2696 kB' 'Cached: 19368340 kB' 'SwapCached: 0 kB' 'Active: 15358628 kB' 'Inactive: 4481728 kB' 'Active(anon): 14744348 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472784 kB' 'Mapped: 178892 kB' 'Shmem: 14275028 kB' 'KReclaimable: 247908 kB' 'Slab: 627508 kB' 'SReclaimable: 247908 kB' 'SUnreclaim: 379600 kB' 'KernelStack: 12992 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15854700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198380 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.008 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.009 16:25:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # key scan continues: Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd do not match HugePages_Surp; continue
00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (match)
00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:43.009 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.010 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-28 -- # local get=HugePages_Rsvd node= var val mem_f=/proc/meminfo (no node argument, so /sys/devices/system/node/node/meminfo does not exist); mapfile -t mem
00:04:43.010 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36715844 kB' 'MemAvailable: 41441968 kB' 'Buffers: 2696 kB' 'Cached: 19368360 kB' 'SwapCached: 0 kB' 'Active: 15357592 kB' 'Inactive: 4481728 kB' 'Active(anon): 14743312 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471676 kB' 'Mapped: 178876 kB' 'Shmem: 14275048 kB' 'KReclaimable: 247908 kB' 'Slab: 627508 kB' 'SReclaimable: 247908 kB' 'SUnreclaim: 379600 kB' 'KernelStack: 13008 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15854720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198380 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:43.010 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # key scan: MemTotal through HugePages_Free do not match HugePages_Rsvd; continue
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] (match)
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:43.011 nr_hugepages=1025
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.011 resv_hugepages=0
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.011 surplus_hugepages=0
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.011 anon_hugepages=0
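For readers skimming the trace: each get_meminfo call above is a plain scan. The snapshot is read as "key: value" pairs and every key is skipped with continue until the requested one matches, at which point its value is echoed. Below is a minimal, self-contained bash sketch of that lookup pattern; the name get_meminfo_sketch and its internals are an illustrative reconstruction from this log, not the code in the repository's setup/common.sh.

get_meminfo_sketch() {  # usage: get_meminfo_sketch <key> [numa-node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; those lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}              # drop the sysfs prefix, if present
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value [kB]"
        if [[ $var == "$get" ]]; then           # keep scanning until the key matches
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

Against the snapshot printed above, get_meminfo_sketch HugePages_Rsvd would print 0, which matches the resv=0 the test records next.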
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.011 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-28 -- # local get=HugePages_Total node= var val mem_f=/proc/meminfo; mapfile -t mem
00:04:43.012 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36716840 kB' 'MemAvailable: 41442964 kB' 'Buffers: 2696 kB' 'Cached: 19368364 kB' 'SwapCached: 0 kB' 'Active: 15358528 kB' 'Inactive: 4481728 kB' 'Active(anon): 14744248 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472604 kB' 'Mapped: 178876 kB' 'Shmem: 14275052 kB' 'KReclaimable: 247908 kB' 'Slab: 627508 kB' 'SReclaimable: 247908 kB' 'SUnreclaim: 379600 kB' 'KernelStack: 13024 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15854744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:43.012 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # key scan: MemTotal through Unaccepted do not match HugePages_Total; continue
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] (match)
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29-30 -- # for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=512 nodes_sys[1]=513
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
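The odd allocation is the point of this pass: 1025 is deliberately not divisible across the two NUMA nodes, and the kernel settled on 512 pages for node0 and 513 for node1. A hedged sketch of the accounting that the @107/@110 arithmetic checks and the per-node readback amount to (variable names mirror the trace; the concrete values are the ones observed in this run):

nr_hugepages=1025 surp=0 resv=0          # values echoed by the test above
nodes_sys=([0]=512 [1]=513)              # per-node counts read back from sysfs
total=0
for node in "${!nodes_sys[@]}"; do
    (( total += nodes_sys[node] ))       # 512 + 513
done
(( total == nr_hugepages )) && echo "per-node split adds up: $total"
(( nr_hugepages + surp + resv == 1025 )) && echo "global accounting holds"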
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Surp node=0 var val; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21724940 kB' 'MemUsed: 11152000 kB' 'SwapCached: 0 kB' 'Active: 8126640 kB' 'Inactive: 1090600 kB' 'Active(anon): 7795084 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899068 kB' 'Mapped: 64240 kB' 'AnonPages: 321324 kB' 'Shmem: 7476912 kB' 'KernelStack: 8136 kB' 'PageTables: 5440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138236 kB' 'Slab: 316576 kB' 'SReclaimable: 138236 kB' 'SUnreclaim: 178340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:43.013 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # key scan on node0: MemTotal through Unaccepted (in snapshot order) do not match HugePages_Surp; continue
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.015 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 14994100 kB' 'MemUsed: 12670688 kB' 'SwapCached: 0 kB' 'Active: 7231844 kB' 'Inactive: 3391128 kB' 'Active(anon): 6949120 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10471992 kB' 'Mapped: 114636 kB' 'AnonPages: 151184 kB' 'Shmem: 6798140 kB' 'KernelStack: 4840 kB' 'PageTables: 2700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109712 kB' 'Slab: 310964 kB' 'SReclaimable: 109712 kB' 'SUnreclaim: 201252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
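The trace above shows what get_meminfo is doing for a per-node lookup: when a node argument is given and /sys/devices/system/node/node<N>/meminfo exists, it reads that file instead of /proc/meminfo, strips the "Node <N> " prefix from each line (the mapfile plus "${mem[@]#Node +([0-9]) }" step), and then scans key/value pairs with IFS=': ' until the requested field, here HugePages_Surp, matches. A minimal standalone sketch of the same parsing approach, assuming bash with extglob; the function name get_node_meminfo is hypothetical, not the actual setup/common.sh helper:

    shopt -s extglob
    # Sketch: echo one meminfo field, optionally from a specific NUMA node.
    get_node_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Per-node statistics live under /sys and prefix every line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }       # drop the "Node <N> " prefix, if any
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then     # e.g. HugePages_Surp
                echo "${val:-0}"
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    # Example: get_node_meminfo HugePages_Surp 1   -> prints 0 on this host

Doing the scan in pure shell, as the traced helper also does, keeps each lookup to a single pass over the file with no external processes spawned.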
[xtrace condensed: setup/common.sh@32 checks each node1 meminfo field (MemTotal through HugePages_Free) against HugePages_Surp and skips it with continue]
00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:43.017 node0=512 expecting 513 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:43.017 node1=513 expecting 512 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:43.017 00:04:43.017 real 0m1.545s 00:04:43.017 user 0m0.651s 00:04:43.017 sys 0m0.862s 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.017 16:25:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.017 ************************************ 00:04:43.017 END TEST odd_alloc 00:04:43.017 ************************************ 00:04:43.017 16:25:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:43.017 16:25:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.017 16:25:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.017 16:25:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.017 ************************************ 00:04:43.017 START TEST custom_alloc 00:04:43.017 ************************************ 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.017 16:25:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.387 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:44.387 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:44.387 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:44.387 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:44.387 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:44.387 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:44.387 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.387 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.387 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:44.387 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:44.387 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:44.387 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:44.387 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:44.387 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:44.387 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:44.387 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.387 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.650 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35647372 kB' 'MemAvailable: 40373512 kB' 'Buffers: 2696 kB' 'Cached: 19368468 kB' 'SwapCached: 0 kB' 'Active: 15356720 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742440 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470516 kB' 'Mapped: 178860 
kB' 'Shmem: 14275156 kB' 'KReclaimable: 247940 kB' 'Slab: 627408 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379468 kB' 'KernelStack: 12976 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15854740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@32 checks each /proc/meminfo field (MemTotal through HardwareCorrupted) against AnonHugePages and skips it with continue]
00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18
-- # local node= 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35642836 kB' 'MemAvailable: 40368976 kB' 'Buffers: 2696 kB' 'Cached: 19368472 kB' 'SwapCached: 0 kB' 'Active: 15359840 kB' 'Inactive: 4481728 kB' 'Active(anon): 14745560 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473668 kB' 'Mapped: 179316 kB' 'Shmem: 14275160 kB' 'KReclaimable: 247940 kB' 'Slab: 627384 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379444 kB' 'KernelStack: 12944 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15858296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198524 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.652 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.652 16:25:51 
[xtrace condensed: setup/common.sh@32 checks each /proc/meminfo field (Buffers onward) against HugePages_Surp and skips it with continue; the scan resumes below]
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 
16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.653 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.654 16:25:51 
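The loop condensed above is the get_meminfo helper from setup/common.sh: it snapshots a meminfo file into an array, strips any per-node "Node N " prefix, then walks the "Field: value" pairs with IFS=': ' until the requested field matches. A rough standalone re-creation of that technique in bash follows; it is an illustrative sketch, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo FIELD [NODE] - print the value column of one meminfo field,
    # reading the per-NUMA-node view when a node index is supplied.
    get_meminfo() {
        local get=$1 node=$2 var val _ line mem_f mem
        mem_f=/proc/meminfo
        # With a node index, prefer that node's own sysfs meminfo file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this box, per the snapshot above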
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.654 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35639740 kB' 'MemAvailable: 40365880 kB' 'Buffers: 2696 kB' 'Cached: 19368488 kB' 'SwapCached: 0 kB' 'Active: 15361996 kB' 'Inactive: 4481728 kB' 'Active(anon): 14747716 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475764 kB' 'Mapped: 179716 kB' 'Shmem: 14275176 kB' 'KReclaimable: 247940 kB' 'Slab: 627384 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379444 kB' 'KernelStack: 12976 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15861104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198512 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
[repetitive trace condensed: the same setup/common.sh@31-32 read/compare cycle walks every meminfo field against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, taking the continue branch each time until the match below]
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:44.656 nr_hugepages=1536
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:44.656 resv_hugepages=0
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:44.656 surplus_hugepages=0
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:44.656 anon_hugepages=0
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
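At this point the test has the numbers it needs, and the rule it just evaluated at hugepages.sh@107 is HugePages_Total == nr_hugepages + surplus + reserved. A compact bash sketch of the same bookkeeping, reusing the get_meminfo sketch above (the values in the comments are the ones from this run; the error message is illustrative):

    nr_hugepages=1536                      # 2048 kB pages requested by the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1536 in this run
    # The pool is consistent only if the kernel reports exactly the requested
    # pages once surplus and reserved pages are accounted for.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2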
14275196 kB' 'KReclaimable: 247940 kB' 'Slab: 627468 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379528 kB' 'KernelStack: 12960 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15856024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.656 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.657 16:25:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.657 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace field scan elided: SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted each fail the "== HugePages_Total" test and hit "continue"]
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
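The scan traced above is the core of get_meminfo in setup/common.sh: each meminfo line is split on ': ', the field name is compared against the requested key, and the value is echoed on the first match (1536 hugepages here). Every miss shows up in the xtrace as a "[[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" pair. A minimal sketch of the same pattern, under a hypothetical name so it is not mistaken for the real helper:

get_meminfo_value() {
    # Scan /proc/meminfo one "Field: value [unit]" line at a time.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each miss traces as above
        echo "$val"
        return 0
    done </proc/meminfo
    return 1   # field not present in this kernel's meminfo
}

get_meminfo_value HugePages_Total   # prints 1536 on this runner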
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.658 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21711340 kB' 'MemUsed: 11165600 kB' 'SwapCached: 0 kB' 'Active: 8129048 kB' 'Inactive: 1090600 kB' 'Active(anon): 7797492 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899156 kB' 'Mapped: 64228 kB' 'AnonPages: 323568 kB' 'Shmem: 7477000 kB' 'KernelStack: 8072 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138236 kB' 'Slab: 316560 kB' 'SReclaimable: 138236 kB' 'SUnreclaim: 178324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace field scan elided: node0 fields MemTotal through HugePages_Free each fail the "== HugePages_Surp" test and hit "continue"]
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.660 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 13928192 kB' 'MemUsed: 13736596 kB' 'SwapCached: 0 kB' 'Active: 7230416 kB' 'Inactive: 3391128 kB' 'Active(anon): 6947692 kB' 'Inactive(anon): 0 kB' 'Active(file): 282724 kB' 'Inactive(file): 3391128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10472092 kB' 'Mapped: 115072 kB' 'AnonPages: 149536 kB' 'Shmem: 6798240 kB' 'KernelStack: 4824 kB' 'PageTables: 2576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109704 kB' 'Slab: 310908 kB' 'SReclaimable: 109704 kB' 'SUnreclaim: 201204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace field scan elided: node1 fields MemTotal through HugePages_Free each fail the "== HugePages_Surp" test and hit "continue"]
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
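Both HugePages_Surp lookups above read the per-node files rather than /proc/meminfo: when get_meminfo is given a node number it switches mem_f to /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix each line carries there, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does (the +([0-9]) pattern needs extglob). A sketch of that per-node path, with the same simplified, hypothetical naming as before:

shopt -s extglob   # enables the +([0-9]) pattern used below

node_meminfo_value() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # Prefer the per-node meminfo when a node id was given and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start "Node 0 ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_value HugePages_Surp 0   # both nodes report 0 surplus pages here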
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:44.661 node0=512 expecting 512
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:44.661 node1=1024 expecting 1024
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:44.661
00:04:44.661 real 0m1.592s
00:04:44.661 user 0m0.644s
00:04:44.661 sys 0m0.916s
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:44.661 16:25:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:44.661 ************************************
00:04:44.661 END TEST custom_alloc
00:04:44.661 ************************************
00:04:44.661 16:25:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:44.661 16:25:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:44.661 16:25:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:44.661 16:25:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:44.661 ************************************
00:04:44.661 START TEST no_shrink_alloc
00:04:44.661 ************************************
00:04:44.661 16:25:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:04:44.661 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:44.661 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:44.661 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:44.661 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:44.661 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
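The get_test_nr_hugepages call traced above turns the requested reservation, 2097152 kB (2 GiB), into 1024 pages of the 2048 kB default hugepage size, and get_test_nr_hugepages_per_node pins the whole count to the one node id that was passed ('0'). The arithmetic, paraphrased (variable names here are illustrative, not the script's own):

# Paraphrase of the traced sizing logic; values are the ones from this run.
size_kb=2097152                      # requested reservation, in kB (2 GiB)
default_hugepage_kb=2048             # Hugepagesize from /proc/meminfo
nr_hugepages=$((size_kb / default_hugepage_kb))   # 2097152 / 2048 = 1024
node_ids=(0)                         # explicit node list passed to the test
declare -a nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages   # all 1024 pages requested on node0
done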
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.662 16:25:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:46.056 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:46.056 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:46.056 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:46.056 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:46.056 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:46.056 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:46.056 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:46.056 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:46.056 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:46.056 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:46.056 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:46.056 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:46.056 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:46.056 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:46.056 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:46.056 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:46.056 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
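setup.sh reports every device in the listing above (the NVMe drive at 0000:0b:00.0 and the IOAT DMA channels) as already bound to vfio-pci, so nothing needs rebinding before the test runs. The current binding of a PCI function can be read straight from sysfs; a sketch of that check (this is the standard sysfs interface, not necessarily setup.sh's exact code, and the BDF is taken from the listing above):

bdf=0000:0b:00.0   # the NVMe device (8086 0a54) from the listing above
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    # The driver entry is a symlink into the bound driver's sysfs directory.
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "$bdf already bound to $driver"   # vfio-pci in this run
else
    echo "$bdf is not bound to any driver"
fi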
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.056 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36676804 kB' 'MemAvailable: 41402944 kB' 'Buffers: 2696 kB' 'Cached: 19368596 kB' 'SwapCached: 0 kB' 'Active: 15357568 kB' 'Inactive: 4481728 kB' 'Active(anon): 14743288 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471288 kB' 'Mapped: 178920 kB' 'Shmem: 14275284 kB' 'KReclaimable: 247940 kB' 'Slab: 627444 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379504 kB' 'KernelStack: 13024 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
[xtrace field scan elided: /proc/meminfo fields from MemTotal onward each fail the "== AnonHugePages" test and hit "continue"; the capture breaks off mid-scan at VmallocUsed and the scan continues below]
continue 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36679648 kB' 'MemAvailable: 41405788 kB' 'Buffers: 2696 kB' 'Cached: 19368600 kB' 'SwapCached: 0 kB' 'Active: 15357076 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742796 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 
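For reference, the scan traced above amounts to the following bash pattern: read the meminfo source into an array, strip any "Node <N> " prefix (the per-NUMA-node files carry one), then split each line on ': ' until the requested key matches. This is a minimal sketch of that pattern, not the verbatim setup/common.sh source; the helper name get_meminfo_sketch and the missing-key fallback are assumptions.

    # Sketch of the meminfo lookup pattern seen in this trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem var val _ line
        # Use the per-node view when a node id is supplied and present
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # value only; the unit lands in $_
                return 0
            fi
        done
        echo 0   # assumption: an absent key counts as 0
    }

Called as get_meminfo_sketch AnonHugePages on this box it would print 0, matching the anon=0 assignment above.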
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36679648 kB' 'MemAvailable: 41405788 kB' 'Buffers: 2696 kB' 'Cached: 19368600 kB' 'SwapCached: 0 kB' 'Active: 15357076 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742796 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470828 kB' 'Mapped: 178956 kB' 'Shmem: 14275288 kB' 'KReclaimable: 247940 kB' 'Slab: 627460 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379520 kB' 'KernelStack: 13040 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198492 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.370 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the scan repeats for every non-matching key, MemFree through HugePages_Rsvd]
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
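A side note on reading this trace: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption in the log. They are how bash xtrace prints a quoted comparison operand: quoting the right side of == inside [[ ]] forces a literal string match instead of a glob, and set -x escapes every character to make that explicit. A short experiment reproduces it (a sketch, runnable anywhere bash is available):

    # Reproduces the escaped operands seen throughout this trace:
    # under `set -x`, the quoted RHS of == is printed char-escaped.
    set -x
    get=HugePages_Surp
    [[ HugePages_Surp == "$get" ]] && echo matched
    # xtrace prints: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x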
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36679152 kB' 'MemAvailable: 41405292 kB' 'Buffers: 2696 kB' 'Cached: 19368616 kB' 'SwapCached: 0 kB' 'Active: 15356788 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742508 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470452 kB' 'Mapped: 178876 kB' 'Shmem: 14275304 kB' 'KReclaimable: 247940 kB' 'Slab: 627444 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379504 kB' 'KernelStack: 13024 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198492 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.371 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the scan repeats for every non-matching key, MemFree through HugePages_Free]
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:46.372 nr_hugepages=1024
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:46.372 resv_hugepages=0
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:46.372 surplus_hugepages=0
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:46.372 anon_hugepages=0
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
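The two arithmetic guards just above are the point of all this scanning: the run asked for 1024 hugepages, and the harness verifies the kernel's counters agree, with no surplus or reserved pages skewing the total. Reusing the sketch helper from earlier, an equivalent standalone check could look like the following; the 1024 target and the exact guard expression are assumptions read off this run's output, not the verbatim setup/hugepages.sh source.

    # Standalone version of the accounting guard traced at
    # setup/hugepages.sh@107-109 (sketch; expression assumed from the log).
    requested=1024   # this run's nr_hugepages target
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    if (( requested == total + surp + resv )) && (( requested == total )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
        exit 1
    fi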
247940 kB' 'Slab: 627444 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379504 kB' 'KernelStack: 13024 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198492 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 16:25:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: setup/common.sh@31-32 reads every remaining field of the /proc/meminfo dump above with IFS=': ' read -r var val _, tests each name against HugePages_Total, and continues on mismatch]
00:04:46.372 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
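The records condensed above are all one helper at work: get_meminfo in setup/common.sh walks a meminfo file line by line and echoes the value of the first field whose name matches its argument (HugePages_Total here, 1024). A minimal sketch of that lookup, assuming illustrative names of my own (get_meminfo_sketch) rather than the verbatim helper:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node <N> " prefix strip below

    # Sketch of the lookup being traced; not the exact setup/common.sh source.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # A node argument redirects the read to that node's own meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 MemTotal: ..."; strip the prefix so
        # the same "Field: value" parse works for both files.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Total:    1024" into var/val; each
            # non-matching field is one of the 'continue' records in the trace.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total     # system-wide -> 1024
    get_meminfo_sketch HugePages_Surp 0    # node0 -> 0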
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20651824 kB' 'MemUsed: 12225116 kB' 'SwapCached: 0 kB' 'Active: 8125952 kB' 'Inactive: 1090600 kB' 'Active(anon): 7794396 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899160 kB' 'Mapped: 64236 kB' 'AnonPages: 320512 kB' 'Shmem: 7477004 kB' 'KernelStack: 8168 kB' 'PageTables: 5112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138236 kB' 'Slab: 316644 kB' 'SReclaimable: 138236 kB' 'SUnreclaim: 178408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the same per-field scan runs over the node0 dump above, this time against HugePages_Surp; its last iterations and the match follow]
00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.373 node0=1024 expecting 1024 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.373 16:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.751 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.751 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.751 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.751 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.751 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.751 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.751 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:47.751 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.751 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.751 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:47.751 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.751 0000:80:04.5 (8086 0e25): Already using 
the vfio-pci driver 00:04:47.751 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.751 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.751 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.751 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:47.751 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.751 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.751 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36682784 kB' 'MemAvailable: 41408924 kB' 'Buffers: 2696 kB' 'Cached: 19368712 kB' 'SwapCached: 0 kB' 'Active: 15357500 kB' 'Inactive: 4481728 kB' 'Active(anon): 14743220 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471084 kB' 'Mapped: 178908 kB' 'Shmem: 14275400 kB' 'KReclaimable: 247940 kB' 'Slab: 627276 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379336 kB' 'KernelStack: 13024 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198700 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
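The hugepage fields in that dump can also be pulled straight out of /proc/meminfo, which is a quick manual cross-check on what the trace reports; a hedged one-liner using plain grep (nothing test-suite specific):

    # Print the hugepage-related fields shown in the dump above.
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo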
[xtrace condensed: the per-field scan now runs against AnonHugePages; the trace resumes below at its final iterations and the match, which yields 0]
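Just before this lookup the trace tested the transparent-hugepage mode ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]] at setup/hugepages.sh@96): with THP pinned to [never] there can be no anonymous huge pages, so AnonHugePages is only read when the mode allows them. A hedged sketch of that gate, reusing get_meminfo_sketch from the earlier snippet (name illustrative):

    # Count anonymous huge pages only when THP is not disabled outright.
    anon_hugepages_sketch() {
        local thp anon=0
        thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
        if [[ $thp != *"[never]"* ]]; then
            anon=$(get_meminfo_sketch AnonHugePages)         # 0 kB in the dump above
        fi
        echo "$anon"
    }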
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36687492 kB' 'MemAvailable: 41413632 kB' 'Buffers: 2696 kB' 'Cached: 19368716 kB' 'SwapCached: 0 kB' 'Active: 15356992 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742712 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470556 kB' 'Mapped: 178884 kB' 'Shmem: 14275404 kB' 'KReclaimable: 247940 kB' 'Slab: 627364 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379424 kB' 'KernelStack: 13056 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:47.753 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every snapshot key from MemTotal through HugePages_Rsvd fails the HugePages_Surp comparison and hits continue]
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
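The anon and surp lookups above, and the HugePages_Rsvd lookup below, each rescan the same snapshot for a single key. Where only the hugepage counters matter, one pass can collect them all; a hypothetical alternative, not part of setup/common.sh:

    # One read of /proc/meminfo, all HugePages_* counters at once.
    declare -A hp
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_* ]] && hp[$var]=$val
    done < /proc/meminfo
    printf 'total=%s free=%s rsvd=%s surp=%s\n' \
        "${hp[HugePages_Total]}" "${hp[HugePages_Free]}" \
        "${hp[HugePages_Rsvd]}" "${hp[HugePages_Surp]}"

Against the snapshot above this prints total=1024 free=1024 rsvd=0 surp=0.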
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36687492 kB' 'MemAvailable: 41413632 kB' 'Buffers: 2696 kB' 'Cached: 19368732 kB' 'SwapCached: 0 kB' 'Active: 15357004 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742724 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470556 kB' 'Mapped: 178884 kB' 'Shmem: 14275420 kB' 'KReclaimable: 247940 kB' 'Slab: 627364 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379424 kB' 'KernelStack: 13056 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:47.755 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every snapshot key from MemTotal through HugePages_Free fails the HugePages_Rsvd comparison and hits continue]
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
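With anon, surp and resv collected, setup/hugepages.sh echoes the summary and asserts that the hugepage pool is consistent before the no-shrink check proceeds. Spelled out with this run's values (the 1024 literals in the trace are values bash had already expanded; per the snapshots, HugePages_Total and HugePages_Free are both 1024 here):

    nr_hugepages=1024   # requested pool size
    surp=0              # HugePages_Surp from the snapshot
    resv=0              # HugePages_Rsvd from the snapshot
    # setup/hugepages.sh@107: observed pool == requested + surplus + reserved
    (( 1024 == nr_hugepages + surp + resv ))
    # setup/hugepages.sh@109: the pool matches the request exactly
    (( 1024 == nr_hugepages ))

Both arithmetic tests succeed, so execution falls through to a final get_meminfo HugePages_Total lookup below.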
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36687492 kB' 'MemAvailable: 41413632 kB' 'Buffers: 2696 kB' 'Cached: 19368756 kB' 'SwapCached: 0 kB' 'Active: 15357060 kB' 'Inactive: 4481728 kB' 'Active(anon): 14742780 kB' 'Inactive(anon): 0 kB' 'Active(file): 614280 kB' 'Inactive(file): 4481728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470556 kB' 'Mapped: 178884 kB' 'Shmem: 14275444 kB' 'KReclaimable: 247940 kB' 'Slab: 627364 kB' 'SReclaimable: 247940 kB' 'SUnreclaim: 379424 kB' 'KernelStack: 13056 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15855408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1807964 kB' 'DirectMap2M: 20131840 kB' 'DirectMap1G: 47185920 kB'
00:04:47.757 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: snapshot keys MemTotal through KernelStack fail the HugePages_Total comparison and hit continue]
00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.758 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.759 16:25:54 
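[Editor's note] The scans condensed above are easier to follow outside the xtrace: setup/common.sh's get_meminfo streams either /proc/meminfo or a per-node meminfo file and prints the value of one key, which is why the trace shows one read/continue pair per key. A minimal sketch reconstructed from the trace, not the verbatim helper; the sed-based prefix strip stands in for the script's own "${mem[@]#Node +([0-9]) }" expansion:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as traced above: scan a meminfo file for one key.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node files prefix every line with "Node N "; the real script
        # strips that after mapfile, sed does the same job here.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Every "continue" in the trace is this test failing once.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # Usage mirroring this run: system-wide total, then node0 surplus, which
    # feed the hugepages.sh check (( 1024 == nr_hugepages + surp + resv )).
    get_meminfo HugePages_Total     # prints 1024 on this machine
    get_meminfo HugePages_Surp 0    # prints 0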
00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20671260 kB' 'MemUsed: 12205680 kB' 'SwapCached: 0 kB' 'Active: 8126244 kB' 'Inactive: 1090600 kB' 'Active(anon): 7794688 kB' 'Inactive(anon): 0 kB' 'Active(file): 331556 kB' 'Inactive(file): 1090600 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8899160 kB' 'Mapped: 64244 kB' 'AnonPages: 320808 kB' 'Shmem: 7477004 kB' 'KernelStack: 8200 kB' 'PageTables: 5136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138236 kB' 'Slab: 316648 kB' 'SReclaimable: 138236 kB' 'SUnreclaim: 178412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:47.759 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same read/continue scan now walks the node0 keys printed above, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.760 node0=1024 expecting 1024 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.760 00:04:47.760 real 0m3.114s 00:04:47.760 user 0m1.286s 00:04:47.760 sys 0m1.758s 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.760 16:25:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.760 ************************************ 00:04:47.760 END TEST no_shrink_alloc 00:04:47.760 ************************************ 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:48.018 16:25:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:48.018 00:04:48.018 real 0m12.332s 00:04:48.018 user 0m4.755s 00:04:48.018 sys 0m6.442s 00:04:48.018 16:25:54 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.018 16:25:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.018 ************************************ 00:04:48.018 END TEST hugepages 00:04:48.018 ************************************ 00:04:48.018 16:25:55 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:48.018 16:25:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.018 16:25:55 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.018 16:25:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.018 ************************************ 00:04:48.018 START TEST driver 00:04:48.018 ************************************ 00:04:48.018 16:25:55 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:48.018 * Looking for test storage... 
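[Editor's note] The clear_hp trace just above (hugepages.sh@217 and @39-45) is the teardown that closes TEST hugepages: every hugepage pool on every NUMA node is zeroed before the next test group runs. A hedged sketch of the same idea, requiring root, with sysfs paths as in the trace; the bare "echo 0" lines in the log are assumed to be redirections into nr_hugepages:

    # Zero all hugepage pools on all NUMA nodes, then flag the cleared
    # state for later setup.sh invocations.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # the "echo 0" lines in the trace
            done
        done
        export CLEAR_HUGE=yes
    }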
00:04:48.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:48.018 16:25:55 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:48.018 16:25:55 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.018 16:25:55 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.544 16:25:57 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:50.544 16:25:57 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.544 16:25:57 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.544 16:25:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.544 ************************************ 00:04:50.544 START TEST guess_driver 00:04:50.544 ************************************ 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:50.544 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:50.544
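[Editor's note] The guess_driver trace is terse, but the decision it encodes is simple: vfio-pci is picked when the kernel exposes IOMMU groups (189 of them here) or unsafe no-IOMMU mode is enabled, and when modprobe can resolve vfio_pci to real kernel modules, which is what the insmod listing above demonstrates. A sketch under those assumptions; the function body is reconstructed from the trace, not copied from driver.sh:

    # Pick vfio-pci if the IOMMU setup supports it (cf. driver.sh@21-37).
    vfio() {
        local unsafe_vfio=N iommu_groups
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
            # "is_driver": vfio_pci is usable when it resolves to .ko files,
            # exactly the *\.\k\o* test visible in the trace.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci && return 0
            fi
        fi
        return 1
    }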
16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:50.544 Looking for driver=vfio-pci 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.544 16:25:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:51.918 16:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@57-61 -- # [xtrace condensed: for every device line in the config output the loop repeats [[ -> == \-\> ]], [[ vfio-pci == vfio-pci ]], read -r _ _ _ _ marker setup_driver (timestamps 16:25:58 through 16:25:59); every device is already bound to the guessed driver, so fail stays 0]
00:04:52.854 16:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:52.854 16:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:52.854 16:26:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.854 16:26:00 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.383 00:04:55.383 real 0m4.952s user 0m1.175s sys 0m1.957s 16:26:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.383 16:26:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:55.383 ************************************ 00:04:55.383 END TEST guess_driver 00:04:55.383 ************************************ 00:04:55.383 00:04:55.383 real 0m7.485s user 0m1.779s sys 0m3.033s 16:26:02 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.383
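[Editor's note] The run of checks condensed above is the per-device confirmation pass: "setup output config" prints one line per device ending in "-> <driver>", and the test fails if any bound driver differs from the guess. A sketch of that loop; the five-field line format is inferred from the "read -r _ _ _ _ marker setup_driver" trace, and the script path is the one this workspace uses:

    # Confirm every device line from "setup.sh config" is bound to the
    # driver guess_driver picked; fail stays 0 only if all match.
    driver=vfio-pci fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue          # skip non-device lines
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( fail == 0 )) && echo "all devices bound to $driver"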
16:26:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:55.383 ************************************ 00:04:55.383 END TEST driver 00:04:55.383 ************************************ 00:04:55.383 16:26:02 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:55.383 16:26:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.383 16:26:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.383 16:26:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.383 ************************************ 00:04:55.383 START TEST devices 00:04:55.383 ************************************ 00:04:55.383 16:26:02 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:55.642 * Looking for test storage... 00:04:55.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:55.642 16:26:02 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:55.642 16:26:02 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:55.642 16:26:02 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.642 16:26:02 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.015 16:26:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:57.015 16:26:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:57.015 16:26:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:57.015 16:26:04 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:57.015 No valid GPT data, 
bailing 00:04:57.015 16:26:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.015 16:26:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:57.015 16:26:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:57.016 16:26:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:57.016 16:26:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:57.016 16:26:04 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:57.016 16:26:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:57.016 16:26:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.016 16:26:04 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.016 16:26:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.016 ************************************ 00:04:57.016 START TEST nvme_mount 00:04:57.016 ************************************ 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:57.016 16:26:04 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.016 16:26:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:58.387 Creating new GPT entries in memory. 00:04:58.387 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.387 other utilities. 00:04:58.387 16:26:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.387 16:26:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.387 16:26:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.387 16:26:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.387 16:26:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:59.320 Creating new GPT entries in memory. 00:04:59.320 The operation has completed successfully. 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1624943 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
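[Editor's note] Two helpers traced in this stretch deserve a plain restatement. The disk qualifies for the test because spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing", empty PTTYPE) and its 1000204886016 bytes clear the 3 GiB min_disk_size; partition_drive then writes a fresh label with one 1 GiB partition: 1073741824 / 512 = 2097152 sectors, so starting at LBA 2048 the partition ends at 2048 + 2097152 - 1 = 2099199, exactly the --new=1:2048:2099199 above. A hedged sketch of both steps, with the device name and sizes taken from this run:

    # 1) Qualify the disk: unpartitioned and at least min_disk_size bytes.
    disk=nvme0n1
    min_disk_size=$((3 * 1024 * 1024 * 1024))            # 3221225472
    pt=$(blkid -s PTTYPE -o value "/dev/$disk")          # empty => no table
    bytes=$(( $(cat "/sys/block/$disk/size") * 512 ))    # size is in 512 B sectors
    if [[ -z $pt ]] && (( bytes >= min_disk_size )); then
        # 2) Partition it: wipe the label, carve one 1 GiB partition.
        size=$((1073741824 / 512))                       # 2097152 sectors
        part_start=2048
        part_end=$(( part_start + size - 1 ))            # 2099199
        sgdisk "/dev/$disk" --zap-all
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:${part_start}:${part_end}
    fi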
00:04:59.320 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.321 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:59.321 16:26:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.321 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.321 16:26:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62/@60 -- # [xtrace condensed: each IOAT channel 0000:00:04.0 through 0000:00:04.7 is compared against PCI_ALLOWED 0000:0b:00.0 and read past]
00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62/@60 -- # [xtrace condensed: the second socket's channels 0000:80:04.0 through 0000:80:04.7 are compared and read past the same way]
00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.693 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.693 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.952 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:00.952 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:00.952 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.952 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.952 16:26:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 16:26:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.952 16:26:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:00.952 16:26:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.952 16:26:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.325 16:26:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.727 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.727 00:05:03.727 real 0m6.584s 00:05:03.727 user 0m1.589s 00:05:03.727 sys 0m2.607s 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.727 16:26:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:03.727 ************************************ 00:05:03.727 END TEST nvme_mount 00:05:03.727 ************************************ 00:05:03.727 
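The long runs of [[ 0000:xx:04.x == \0\0\0\0\:\0\b\:\0\0\.\0 ]] tests that dominate the trace above are verify() scanning `setup.sh config` output: each line is read as `pci _ _ status`, and found flips to 1 once the allowed device's status line advertises the expected active mounts. A sketch of that scan, assuming the config output is fed in via process substitution (only the loop body mirrors devices.sh@59-66 as traced; the plumbing around it is an assumption):

    dev=0000:0b:00.0
    mounts=nvme0n1:nvme0n1p1
    found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$dev" ]]; then
            [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
        fi
    done < <(PCI_ALLOWED=$dev ./scripts/setup.sh config)
    (( found == 1 ))    # verify fails the test when the mount never showed up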
16:26:10 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.727 16:26:10 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.727 16:26:10 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.727 16:26:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.727 ************************************ 00:05:03.727 START TEST dm_mount 00:05:03.727 ************************************ 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.727 16:26:10 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:04.664 Creating new GPT entries in memory. 00:05:04.664 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:04.664 other utilities. 00:05:04.664 16:26:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.664 16:26:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.664 16:26:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.664 16:26:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.664 16:26:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:06.039 Creating new GPT entries in memory. 00:05:06.039 The operation has completed successfully. 
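Because each pass recomputes part_start as the previous part_end + 1, the two 1 GiB partitions dm_mount requests come out contiguous; the second sgdisk call below therefore covers sectors 2099200-4196351. The bounds check out:

    echo $(( 2099199 + 1 )) $(( 2099200 + 2097152 - 1 ))   # -> 2099200 4196351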
00:05:06.039 16:26:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.039 16:26:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.039 16:26:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.039 16:26:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.039 16:26:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:06.973 The operation has completed successfully. 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1627630 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.973 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.974 16:26:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.349 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:08.350 
16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.350 16:26:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.725 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:09.726 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:09.726 00:05:09.726 real 0m6.040s 00:05:09.726 user 0m1.153s 00:05:09.726 sys 0m1.792s 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.726 16:26:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:09.726 ************************************ 00:05:09.726 END TEST dm_mount 00:05:09.726 ************************************ 00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
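cleanup_dm, traced just above, tears the stack down in reverse order: unmount if mounted, remove the device-mapper target, then wipe both backing partitions. A sketch of it; the umount guard is an assumption, since in this run the mount point had already been released at devices.sh@182 and only the mountpoint probe shows in the trace:

    dm_mount=$SPDK_WORKSPACE/spdk/test/setup/dm_mount   # stand-in for the full jenkins workspace path
    mountpoint -q "$dm_mount" && umount "$dm_mount"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2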
00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.726 16:26:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.984 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:09.984 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:09.984 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.984 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.984 16:26:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:09.984 00:05:09.984 real 0m14.618s 00:05:09.984 user 0m3.414s 00:05:09.984 sys 0m5.477s 00:05:09.984 16:26:17 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.984 16:26:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.984 ************************************ 00:05:09.984 END TEST devices 00:05:09.984 ************************************ 00:05:10.242 00:05:10.242 real 0m45.921s 00:05:10.242 user 0m13.739s 00:05:10.242 sys 0m20.920s 00:05:10.242 16:26:17 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.242 16:26:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.242 ************************************ 00:05:10.242 END TEST setup.sh 00:05:10.242 ************************************ 00:05:10.242 16:26:17 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:11.616 Hugepages 00:05:11.616 node hugesize free / total 00:05:11.616 node0 1048576kB 0 / 0 00:05:11.616 node0 2048kB 2048 / 2048 00:05:11.616 node1 1048576kB 0 / 0 00:05:11.616 node1 2048kB 0 / 0 00:05:11.616 00:05:11.616 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.616 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:11.616 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:11.616 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:11.616 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:11.616 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:11.616 16:26:18 -- spdk/autotest.sh@130 -- # uname -s 
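The `setup.sh status` table above is the inventory the rest of autotest works from: one row per BDF with vendor/device IDs, NUMA node, bound driver, and block devices. Pulling a field out of it is a one-liner; for example, the NVMe controller's BDF (an illustrative command, not a helper the scripts define):

    ./scripts/setup.sh status | awk '$1 == "NVMe" {print $2}'   # -> 0000:0b:00.0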
00:05:11.616 16:26:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:11.616 16:26:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:11.616 16:26:18 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.993 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.993 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.993 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:13.930 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:13.930 16:26:21 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:14.867 16:26:22 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:14.868 16:26:22 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:14.868 16:26:22 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.868 16:26:22 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:14.868 16:26:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:14.868 16:26:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:14.868 16:26:22 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.868 16:26:22 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:14.868 16:26:22 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:14.868 16:26:22 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:14.868 16:26:22 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:05:14.868 16:26:22 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.242 Waiting for block devices as requested 00:05:16.500 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:16.500 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:16.500 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:16.500 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:16.500 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:16.759 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:16.759 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:16.759 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:16.759 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:05:17.017 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:17.017 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:17.017 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:17.275 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:17.275 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:17.275 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:17.275 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:17.533 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:17.533 16:26:24 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:05:17.533 16:26:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1498 -- # grep 0000:0b:00.0/nvme/nvme 00:05:17.533 16:26:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:05:17.533 16:26:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:17.533 16:26:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:17.533 16:26:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:17.533 16:26:24 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:17.533 16:26:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:17.533 16:26:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:17.533 16:26:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:17.533 16:26:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:17.533 16:26:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:17.533 16:26:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:17.533 16:26:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:17.533 16:26:24 -- common/autotest_common.sh@1553 -- # continue 00:05:17.533 16:26:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:17.533 16:26:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.533 16:26:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.533 16:26:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:17.533 16:26:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:17.533 16:26:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.533 16:26:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:18.906 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:18.906 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:18.906 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:19.841 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:20.133 16:26:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:20.133 16:26:27 -- common/autotest_common.sh@726 -- # xtrace_disable 
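The nvme_namespace_revert block above maps a PCI BDF back to its kernel controller node by walking sysfs, then reads OACS to see whether namespace management is supported. A sketch of both steps; the glob over /sys/class/nvme and the 0x8 mask are assumptions, since the trace only shows the readlink/basename plumbing plus oacs=0xf and oacs_ns_manage=8:

    bdf=0000:0b:00.0
    for ctrl in /sys/class/nvme/nvme*; do
        # the resolved symlink embeds the PCI path, e.g. .../0000:0b:00.0/nvme/nvme0
        readlink -f "$ctrl" | grep -q "$bdf/nvme/nvme" && nvme_ctrlr=/dev/$(basename "$ctrl")
    done
    oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)
    (( oacs_ns_manage = oacs & 0x8 ))   # bit 3 of OACS = Namespace Management supported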
00:05:20.133 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.133 16:26:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:20.133 16:26:27 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:20.133 16:26:27 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:20.133 16:26:27 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:20.133 16:26:27 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:20.133 16:26:27 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:20.133 16:26:27 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:20.133 16:26:27 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:20.133 16:26:27 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.133 16:26:27 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:20.133 16:26:27 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:20.133 16:26:27 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:20.133 16:26:27 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:05:20.133 16:26:27 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:20.133 16:26:27 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:05:20.133 16:26:27 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:20.133 16:26:27 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:20.133 16:26:27 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:20.133 16:26:27 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:0b:00.0 00:05:20.133 16:26:27 -- common/autotest_common.sh@1588 -- # [[ -z 0000:0b:00.0 ]] 00:05:20.133 16:26:27 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1633511 00:05:20.133 16:26:27 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.133 16:26:27 -- common/autotest_common.sh@1594 -- # waitforlisten 1633511 00:05:20.133 16:26:27 -- common/autotest_common.sh@827 -- # '[' -z 1633511 ']' 00:05:20.133 16:26:27 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.133 16:26:27 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:20.133 16:26:27 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.133 16:26:27 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:20.133 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.134 [2024-05-15 16:26:27.296574] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
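opal_revert_cleanup, traced above, first filters the NVMe BDF list down to controllers with a specific PCI device ID by reading sysfs, then launches spdk_tgt and drives it over JSON-RPC. A sketch of the filter, following autotest_common.sh@1573-1582 as traced (0x0a54 is simply the device ID this run matches on):

    want=0x0a54
    bdfs=()
    for bdf in $(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$want" ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"   # -> 0000:0b:00.0 on this node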
00:05:20.134 [2024-05-15 16:26:27.296681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633511 ] 00:05:20.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.393 [2024-05-15 16:26:27.365973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.393 [2024-05-15 16:26:27.448800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.650 16:26:27 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:20.650 16:26:27 -- common/autotest_common.sh@860 -- # return 0 00:05:20.650 16:26:27 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:20.650 16:26:27 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:20.650 16:26:27 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:05:23.934 nvme0n1 00:05:23.934 16:26:30 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:23.934 [2024-05-15 16:26:31.008936] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:23.934 [2024-05-15 16:26:31.008980] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:23.934 request: 00:05:23.934 { 00:05:23.934 "nvme_ctrlr_name": "nvme0", 00:05:23.934 "password": "test", 00:05:23.934 "method": "bdev_nvme_opal_revert", 00:05:23.934 "req_id": 1 00:05:23.934 } 00:05:23.934 Got JSON-RPC error response 00:05:23.934 response: 00:05:23.934 { 00:05:23.934 "code": -32603, 00:05:23.934 "message": "Internal error" 00:05:23.934 } 00:05:23.934 16:26:31 -- common/autotest_common.sh@1600 -- # true 00:05:23.934 16:26:31 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:23.934 16:26:31 -- common/autotest_common.sh@1604 -- # killprocess 1633511 00:05:23.934 16:26:31 -- common/autotest_common.sh@946 -- # '[' -z 1633511 ']' 00:05:23.934 16:26:31 -- common/autotest_common.sh@950 -- # kill -0 1633511 00:05:23.934 16:26:31 -- common/autotest_common.sh@951 -- # uname 00:05:23.934 16:26:31 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.934 16:26:31 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1633511 00:05:23.934 16:26:31 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.934 16:26:31 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.934 16:26:31 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1633511' 00:05:23.934 killing process with pid 1633511 00:05:23.934 16:26:31 -- common/autotest_common.sh@965 -- # kill 1633511 00:05:23.934 16:26:31 -- common/autotest_common.sh@970 -- # wait 1633511 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.934 EAL: Unexpected size 0 of DMA remapping cleared 
00:05:25.565 16:26:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:25.565 16:26:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:25.565 16:26:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:25.565 16:26:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:25.565 16:26:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:25.565 16:26:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:25.565 16:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.565 16:26:32 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:25.565 16:26:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.565 16:26:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.565 16:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.565 ************************************ 00:05:25.565 START TEST env 00:05:25.565 ************************************ 00:05:25.565 16:26:32 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:25.824 * Looking for test storage... 
00:05:25.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:25.824 16:26:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:25.824 16:26:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.824 16:26:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.824 16:26:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.824 ************************************ 00:05:25.824 START TEST env_memory 00:05:25.824 ************************************ 00:05:25.824 16:26:32 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:25.824 00:05:25.824 00:05:25.824 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.824 http://cunit.sourceforge.net/ 00:05:25.824 00:05:25.824 00:05:25.824 Suite: memory 00:05:25.824 Test: alloc and free memory map ...[2024-05-15 16:26:32.864254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.824 passed 00:05:25.824 Test: mem map translation ...[2024-05-15 16:26:32.885280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.824 [2024-05-15 16:26:32.885302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.824 [2024-05-15 16:26:32.885345] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.824 [2024-05-15 16:26:32.885357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.824 passed 00:05:25.824 Test: mem map registration ...[2024-05-15 16:26:32.926682] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:25.824 [2024-05-15 16:26:32.926702] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:25.824 passed 00:05:25.824 Test: mem map adjacent registrations ...passed 00:05:25.824 00:05:25.824 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.824 suites 1 1 n/a 0 0 00:05:25.824 tests 4 4 4 0 0 00:05:25.824 asserts 152 152 152 0 n/a 00:05:25.824 00:05:25.824 Elapsed time = 0.142 seconds 00:05:25.824 00:05:25.824 real 0m0.150s 00:05:25.824 user 0m0.138s 00:05:25.824 sys 0m0.011s 00:05:25.824 16:26:32 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.824 16:26:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.824 ************************************ 00:05:25.824 END TEST env_memory 00:05:25.824 ************************************ 00:05:25.824 16:26:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.824 16:26:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.824 16:26:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
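Note that the *ERROR* lines inside env_memory are expected: the translation and registration tests deliberately pass invalid vaddr/len pairs, which is why the suite still finishes 4/4 passed. To rerun one of these CUnit binaries without the run_test timing wrapper, a minimal sketch (assuming the tree in this workspace is already built):

    $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $ test/env/memory/memory_ut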
00:05:25.824 16:26:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.824 ************************************ 00:05:25.824 START TEST env_vtophys 00:05:25.824 ************************************ 00:05:25.824 16:26:33 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.824 EAL: lib.eal log level changed from notice to debug 00:05:25.824 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.824 EAL: Detected lcore 1 as core 1 on socket 0 00:05:25.824 EAL: Detected lcore 2 as core 2 on socket 0 00:05:25.824 EAL: Detected lcore 3 as core 3 on socket 0 00:05:25.824 EAL: Detected lcore 4 as core 4 on socket 0 00:05:25.824 EAL: Detected lcore 5 as core 5 on socket 0 00:05:25.824 EAL: Detected lcore 6 as core 8 on socket 0 00:05:25.824 EAL: Detected lcore 7 as core 9 on socket 0 00:05:25.824 EAL: Detected lcore 8 as core 10 on socket 0 00:05:25.824 EAL: Detected lcore 9 as core 11 on socket 0 00:05:25.824 EAL: Detected lcore 10 as core 12 on socket 0 00:05:25.824 EAL: Detected lcore 11 as core 13 on socket 0 00:05:25.824 EAL: Detected lcore 12 as core 0 on socket 1 00:05:25.824 EAL: Detected lcore 13 as core 1 on socket 1 00:05:25.824 EAL: Detected lcore 14 as core 2 on socket 1 00:05:25.824 EAL: Detected lcore 15 as core 3 on socket 1 00:05:25.824 EAL: Detected lcore 16 as core 4 on socket 1 00:05:25.824 EAL: Detected lcore 17 as core 5 on socket 1 00:05:25.824 EAL: Detected lcore 18 as core 8 on socket 1 00:05:25.824 EAL: Detected lcore 19 as core 9 on socket 1 00:05:25.824 EAL: Detected lcore 20 as core 10 on socket 1 00:05:25.824 EAL: Detected lcore 21 as core 11 on socket 1 00:05:25.824 EAL: Detected lcore 22 as core 12 on socket 1 00:05:25.824 EAL: Detected lcore 23 as core 13 on socket 1 00:05:25.824 EAL: Detected lcore 24 as core 0 on socket 0 00:05:25.824 EAL: Detected lcore 25 as core 1 on socket 0 00:05:25.824 EAL: Detected lcore 26 as core 2 on socket 0 00:05:25.824 EAL: Detected lcore 27 as core 3 on socket 0 00:05:25.824 EAL: Detected lcore 28 as core 4 on socket 0 00:05:25.824 EAL: Detected lcore 29 as core 5 on socket 0 00:05:25.824 EAL: Detected lcore 30 as core 8 on socket 0 00:05:25.824 EAL: Detected lcore 31 as core 9 on socket 0 00:05:25.824 EAL: Detected lcore 32 as core 10 on socket 0 00:05:25.824 EAL: Detected lcore 33 as core 11 on socket 0 00:05:25.824 EAL: Detected lcore 34 as core 12 on socket 0 00:05:25.824 EAL: Detected lcore 35 as core 13 on socket 0 00:05:25.824 EAL: Detected lcore 36 as core 0 on socket 1 00:05:25.824 EAL: Detected lcore 37 as core 1 on socket 1 00:05:25.824 EAL: Detected lcore 38 as core 2 on socket 1 00:05:25.824 EAL: Detected lcore 39 as core 3 on socket 1 00:05:25.824 EAL: Detected lcore 40 as core 4 on socket 1 00:05:25.824 EAL: Detected lcore 41 as core 5 on socket 1 00:05:25.824 EAL: Detected lcore 42 as core 8 on socket 1 00:05:25.824 EAL: Detected lcore 43 as core 9 on socket 1 00:05:25.824 EAL: Detected lcore 44 as core 10 on socket 1 00:05:25.824 EAL: Detected lcore 45 as core 11 on socket 1 00:05:25.824 EAL: Detected lcore 46 as core 12 on socket 1 00:05:25.824 EAL: Detected lcore 47 as core 13 on socket 1 00:05:25.824 EAL: Maximum logical cores by configuration: 128 00:05:25.824 EAL: Detected CPU lcores: 48 00:05:25.824 EAL: Detected NUMA nodes: 2 00:05:25.825 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:25.825 EAL: Detected shared linkage of DPDK 00:05:25.825 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:25.825 EAL: Registered [vdev] bus. 00:05:25.825 EAL: bus.vdev log level changed from disabled to notice 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:25.825 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:25.825 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:25.825 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:25.825 EAL: No shared files mode enabled, IPC will be disabled 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Bus pci wants IOVA as 'DC' 00:05:26.084 EAL: Bus vdev wants IOVA as 'DC' 00:05:26.084 EAL: Buses did not request a specific IOVA mode. 00:05:26.084 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:26.084 EAL: Selected IOVA mode 'VA' 00:05:26.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.084 EAL: Probing VFIO support... 00:05:26.084 EAL: IOMMU type 1 (Type 1) is supported 00:05:26.084 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:26.084 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:26.084 EAL: VFIO support initialized 00:05:26.084 EAL: Ask a virtual area of 0x2e000 bytes 00:05:26.084 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:26.084 EAL: Setting up physically contiguous memory... 
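EAL only lands on IOVA mode 'VA' here because an IOMMU is present and VFIO (type 1) initializes. Two quick host-side checks for the same preconditions, using standard Linux interfaces rather than anything harness-specific:

    $ ls /sys/kernel/iommu_groups/ | wc -l   # non-zero means the kernel exposes IOMMU groups
    $ grep Huge /proc/meminfo                # the 2048 kB hugepage pool backing the memseg lists below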
00:05:26.084 EAL: Setting maximum number of open files to 524288 00:05:26.084 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:26.084 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:26.084 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:26.084 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:26.084 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.084 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:26.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.084 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.084 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:26.084 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:26.084 EAL: Hugepages will be freed exactly as allocated. 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: TSC frequency is ~2700000 KHz 00:05:26.084 EAL: Main lcore 0 is ready (tid=7f3cc0dcea00;cpuset=[0]) 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 0 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 2MB 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:26.084 EAL: Mem event callback 'spdk:(nil)' registered 00:05:26.084 00:05:26.084 00:05:26.084 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.084 http://cunit.sourceforge.net/ 00:05:26.084 00:05:26.084 00:05:26.084 Suite: components_suite 00:05:26.084 Test: vtophys_malloc_test ...passed 00:05:26.084 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 4MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 4MB 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 6MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 6MB 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 10MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 10MB 00:05:26.084 EAL: Trying to obtain current memory policy. 
00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 18MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 18MB 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 34MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 34MB 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 66MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 66MB 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.084 EAL: Restoring previous memory policy: 4 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was expanded by 130MB 00:05:26.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.084 EAL: request: mp_malloc_sync 00:05:26.084 EAL: No shared files mode enabled, IPC is disabled 00:05:26.084 EAL: Heap on socket 0 was shrunk by 130MB 00:05:26.084 EAL: Trying to obtain current memory policy. 00:05:26.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.342 EAL: Restoring previous memory policy: 4 00:05:26.342 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.342 EAL: request: mp_malloc_sync 00:05:26.342 EAL: No shared files mode enabled, IPC is disabled 00:05:26.342 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.342 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.342 EAL: request: mp_malloc_sync 00:05:26.342 EAL: No shared files mode enabled, IPC is disabled 00:05:26.342 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.342 EAL: Trying to obtain current memory policy. 
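Each expand/shrink cycle in vtophys_spdk_malloc_test allocates a larger buffer (4 MB up through 1026 MB), so the 'spdk:' mem event callback registered earlier grows the heap on socket 0 and then releases it. A hedged way to watch that happen from a second shell while the test runs, using plain procfs:

    $ watch -n 0.5 'grep -E "HugePages_(Total|Free)" /proc/meminfo'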
00:05:26.342 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.601 EAL: Restoring previous memory policy: 4 00:05:26.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.601 EAL: request: mp_malloc_sync 00:05:26.601 EAL: No shared files mode enabled, IPC is disabled 00:05:26.601 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.601 EAL: request: mp_malloc_sync 00:05:26.601 EAL: No shared files mode enabled, IPC is disabled 00:05:26.601 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.601 EAL: Trying to obtain current memory policy. 00:05:26.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.167 EAL: Restoring previous memory policy: 4 00:05:27.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.167 EAL: request: mp_malloc_sync 00:05:27.167 EAL: No shared files mode enabled, IPC is disabled 00:05:27.167 EAL: Heap on socket 0 was expanded by 1026MB 00:05:27.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.425 EAL: request: mp_malloc_sync 00:05:27.425 EAL: No shared files mode enabled, IPC is disabled 00:05:27.425 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:27.425 passed 00:05:27.425 00:05:27.425 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.425 suites 1 1 n/a 0 0 00:05:27.425 tests 2 2 2 0 0 00:05:27.425 asserts 497 497 497 0 n/a 00:05:27.425 00:05:27.425 Elapsed time = 1.378 seconds 00:05:27.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.425 EAL: request: mp_malloc_sync 00:05:27.425 EAL: No shared files mode enabled, IPC is disabled 00:05:27.425 EAL: Heap on socket 0 was shrunk by 2MB 00:05:27.425 EAL: No shared files mode enabled, IPC is disabled 00:05:27.425 EAL: No shared files mode enabled, IPC is disabled 00:05:27.425 EAL: No shared files mode enabled, IPC is disabled 00:05:27.425 00:05:27.425 real 0m1.508s 00:05:27.425 user 0m0.850s 00:05:27.425 sys 0m0.620s 00:05:27.425 16:26:34 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.425 16:26:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:27.425 ************************************ 00:05:27.425 END TEST env_vtophys 00:05:27.425 ************************************ 00:05:27.425 16:26:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:27.425 16:26:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.425 16:26:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.425 16:26:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.425 ************************************ 00:05:27.425 START TEST env_pci 00:05:27.425 ************************************ 00:05:27.425 16:26:34 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:27.425 00:05:27.425 00:05:27.425 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.425 http://cunit.sourceforge.net/ 00:05:27.425 00:05:27.425 00:05:27.425 Suite: pci 00:05:27.425 Test: pci_hook ...[2024-05-15 16:26:34.596765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1634399 has claimed it 00:05:27.425 EAL: Cannot find device (10000:00:01.0) 00:05:27.425 EAL: Failed to attach device on primary process 00:05:27.425 passed 00:05:27.425 00:05:27.425 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:27.425 suites 1 1 n/a 0 0 00:05:27.425 tests 1 1 1 0 0 00:05:27.425 asserts 25 25 25 0 n/a 00:05:27.425 00:05:27.425 Elapsed time = 0.025 seconds 00:05:27.425 00:05:27.425 real 0m0.036s 00:05:27.425 user 0m0.008s 00:05:27.425 sys 0m0.029s 00:05:27.425 16:26:34 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.426 16:26:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.426 ************************************ 00:05:27.426 END TEST env_pci 00:05:27.426 ************************************ 00:05:27.426 16:26:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.426 16:26:34 env -- env/env.sh@15 -- # uname 00:05:27.426 16:26:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.426 16:26:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.426 16:26:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.426 16:26:34 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:27.426 16:26:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.426 16:26:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.682 ************************************ 00:05:27.682 START TEST env_dpdk_post_init 00:05:27.682 ************************************ 00:05:27.682 16:26:34 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.682 EAL: Detected CPU lcores: 48 00:05:27.682 EAL: Detected NUMA nodes: 2 00:05:27.682 EAL: Detected shared linkage of DPDK 00:05:27.682 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.682 EAL: Selected IOVA mode 'VA' 00:05:27.682 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.682 EAL: VFIO support initialized 00:05:27.682 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.682 EAL: Using IOMMU type 1 (Type 1) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:27.682 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:28.613 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:28.613 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:28.613 EAL: Probe PCI 
driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:31.884 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:05:31.884 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:05:31.884 Starting DPDK initialization... 00:05:31.884 Starting SPDK post initialization... 00:05:31.884 SPDK NVMe probe 00:05:31.884 Attaching to 0000:0b:00.0 00:05:31.884 Attached to 0000:0b:00.0 00:05:31.884 Cleaning up... 00:05:31.884 00:05:31.884 real 0m4.376s 00:05:31.884 user 0m3.233s 00:05:31.884 sys 0m0.201s 00:05:31.884 16:26:39 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.884 16:26:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 ************************************ 00:05:31.884 END TEST env_dpdk_post_init 00:05:31.884 ************************************ 00:05:31.884 16:26:39 env -- env/env.sh@26 -- # uname 00:05:31.884 16:26:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.884 16:26:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.884 16:26:39 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.884 16:26:39 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.884 16:26:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 ************************************ 00:05:31.884 START TEST env_mem_callbacks 00:05:31.884 ************************************ 00:05:31.884 16:26:39 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.142 EAL: Detected CPU lcores: 48 00:05:32.142 EAL: Detected NUMA nodes: 2 00:05:32.142 EAL: Detected shared linkage of DPDK 00:05:32.142 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.142 EAL: Selected IOVA mode 'VA' 00:05:32.142 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.142 EAL: VFIO support initialized 00:05:32.142 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.142 00:05:32.142 00:05:32.142 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.142 http://cunit.sourceforge.net/ 00:05:32.142 00:05:32.142 00:05:32.142 Suite: memory 00:05:32.142 Test: test ... 
00:05:32.142 register 0x200000200000 2097152 00:05:32.142 malloc 3145728 00:05:32.142 register 0x200000400000 4194304 00:05:32.142 buf 0x200000500000 len 3145728 PASSED 00:05:32.142 malloc 64 00:05:32.142 buf 0x2000004fff40 len 64 PASSED 00:05:32.142 malloc 4194304 00:05:32.142 register 0x200000800000 6291456 00:05:32.142 buf 0x200000a00000 len 4194304 PASSED 00:05:32.142 free 0x200000500000 3145728 00:05:32.142 free 0x2000004fff40 64 00:05:32.142 unregister 0x200000400000 4194304 PASSED 00:05:32.142 free 0x200000a00000 4194304 00:05:32.142 unregister 0x200000800000 6291456 PASSED 00:05:32.142 malloc 8388608 00:05:32.142 register 0x200000400000 10485760 00:05:32.142 buf 0x200000600000 len 8388608 PASSED 00:05:32.142 free 0x200000600000 8388608 00:05:32.142 unregister 0x200000400000 10485760 PASSED 00:05:32.142 passed 00:05:32.142 00:05:32.142 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.142 suites 1 1 n/a 0 0 00:05:32.142 tests 1 1 1 0 0 00:05:32.142 asserts 15 15 15 0 n/a 00:05:32.142 00:05:32.142 Elapsed time = 0.005 seconds 00:05:32.142 00:05:32.142 real 0m0.050s 00:05:32.142 user 0m0.015s 00:05:32.142 sys 0m0.035s 00:05:32.142 16:26:39 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.142 16:26:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:32.142 ************************************ 00:05:32.142 END TEST env_mem_callbacks 00:05:32.142 ************************************ 00:05:32.142 00:05:32.142 real 0m6.420s 00:05:32.142 user 0m4.347s 00:05:32.142 sys 0m1.099s 00:05:32.142 16:26:39 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.142 16:26:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.142 ************************************ 00:05:32.142 END TEST env 00:05:32.142 ************************************ 00:05:32.142 16:26:39 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.142 16:26:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.142 16:26:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.142 16:26:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.142 ************************************ 00:05:32.142 START TEST rpc 00:05:32.142 ************************************ 00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.142 * Looking for test storage... 00:05:32.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.142 16:26:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1635059 00:05:32.142 16:26:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:32.142 16:26:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.142 16:26:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1635059 00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@827 -- # '[' -z 1635059 ']' 00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
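waitforlisten blocks until the freshly spawned spdk_tgt answers on its UNIX socket. A simplified approximation of that polling loop (the real helper in autotest_common.sh is more thorough, e.g. it also gives up if the PID dies):

    $ until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    >     sleep 0.1
    > done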
00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:32.142 16:26:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.142 [2024-05-15 16:26:39.324385] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:05:32.142 [2024-05-15 16:26:39.324464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635059 ] 00:05:32.142 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.399 [2024-05-15 16:26:39.390392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.399 [2024-05-15 16:26:39.474558] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.399 [2024-05-15 16:26:39.474616] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1635059' to capture a snapshot of events at runtime. 00:05:32.400 [2024-05-15 16:26:39.474644] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.400 [2024-05-15 16:26:39.474656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.400 [2024-05-15 16:26:39.474666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1635059 for offline analysis/debug. 00:05:32.400 [2024-05-15 16:26:39.474707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.657 16:26:39 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.657 16:26:39 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:32.657 16:26:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.657 16:26:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.657 16:26:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.657 16:26:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.657 16:26:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.657 16:26:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.657 16:26:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.657 ************************************ 00:05:32.657 START TEST rpc_integrity 00:05:32.657 ************************************ 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.657 16:26:39 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.657 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.657 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.657 { 00:05:32.657 "name": "Malloc0", 00:05:32.657 "aliases": [ 00:05:32.657 "dfc8c3e1-7510-4277-9bf7-871fe6bc854c" 00:05:32.657 ], 00:05:32.657 "product_name": "Malloc disk", 00:05:32.657 "block_size": 512, 00:05:32.657 "num_blocks": 16384, 00:05:32.657 "uuid": "dfc8c3e1-7510-4277-9bf7-871fe6bc854c", 00:05:32.657 "assigned_rate_limits": { 00:05:32.657 "rw_ios_per_sec": 0, 00:05:32.657 "rw_mbytes_per_sec": 0, 00:05:32.658 "r_mbytes_per_sec": 0, 00:05:32.658 "w_mbytes_per_sec": 0 00:05:32.658 }, 00:05:32.658 "claimed": false, 00:05:32.658 "zoned": false, 00:05:32.658 "supported_io_types": { 00:05:32.658 "read": true, 00:05:32.658 "write": true, 00:05:32.658 "unmap": true, 00:05:32.658 "write_zeroes": true, 00:05:32.658 "flush": true, 00:05:32.658 "reset": true, 00:05:32.658 "compare": false, 00:05:32.658 "compare_and_write": false, 00:05:32.658 "abort": true, 00:05:32.658 "nvme_admin": false, 00:05:32.658 "nvme_io": false 00:05:32.658 }, 00:05:32.658 "memory_domains": [ 00:05:32.658 { 00:05:32.658 "dma_device_id": "system", 00:05:32.658 "dma_device_type": 1 00:05:32.658 }, 00:05:32.658 { 00:05:32.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.658 "dma_device_type": 2 00:05:32.658 } 00:05:32.658 ], 00:05:32.658 "driver_specific": {} 00:05:32.658 } 00:05:32.658 ]' 00:05:32.658 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.658 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.658 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:32.658 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.658 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.658 [2024-05-15 16:26:39.875661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:32.658 [2024-05-15 16:26:39.875706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.658 [2024-05-15 16:26:39.875730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa423a0 00:05:32.658 [2024-05-15 16:26:39.875745] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.658 [2024-05-15 16:26:39.877287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.658 [2024-05-15 16:26:39.877312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.658 Passthru0 00:05:32.658 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.658 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:32.658 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.658 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.916 { 00:05:32.916 "name": "Malloc0", 00:05:32.916 "aliases": [ 00:05:32.916 "dfc8c3e1-7510-4277-9bf7-871fe6bc854c" 00:05:32.916 ], 00:05:32.916 "product_name": "Malloc disk", 00:05:32.916 "block_size": 512, 00:05:32.916 "num_blocks": 16384, 00:05:32.916 "uuid": "dfc8c3e1-7510-4277-9bf7-871fe6bc854c", 00:05:32.916 "assigned_rate_limits": { 00:05:32.916 "rw_ios_per_sec": 0, 00:05:32.916 "rw_mbytes_per_sec": 0, 00:05:32.916 "r_mbytes_per_sec": 0, 00:05:32.916 "w_mbytes_per_sec": 0 00:05:32.916 }, 00:05:32.916 "claimed": true, 00:05:32.916 "claim_type": "exclusive_write", 00:05:32.916 "zoned": false, 00:05:32.916 "supported_io_types": { 00:05:32.916 "read": true, 00:05:32.916 "write": true, 00:05:32.916 "unmap": true, 00:05:32.916 "write_zeroes": true, 00:05:32.916 "flush": true, 00:05:32.916 "reset": true, 00:05:32.916 "compare": false, 00:05:32.916 "compare_and_write": false, 00:05:32.916 "abort": true, 00:05:32.916 "nvme_admin": false, 00:05:32.916 "nvme_io": false 00:05:32.916 }, 00:05:32.916 "memory_domains": [ 00:05:32.916 { 00:05:32.916 "dma_device_id": "system", 00:05:32.916 "dma_device_type": 1 00:05:32.916 }, 00:05:32.916 { 00:05:32.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.916 "dma_device_type": 2 00:05:32.916 } 00:05:32.916 ], 00:05:32.916 "driver_specific": {} 00:05:32.916 }, 00:05:32.916 { 00:05:32.916 "name": "Passthru0", 00:05:32.916 "aliases": [ 00:05:32.916 "a9d25873-7ff3-51be-bfb0-e57030064e38" 00:05:32.916 ], 00:05:32.916 "product_name": "passthru", 00:05:32.916 "block_size": 512, 00:05:32.916 "num_blocks": 16384, 00:05:32.916 "uuid": "a9d25873-7ff3-51be-bfb0-e57030064e38", 00:05:32.916 "assigned_rate_limits": { 00:05:32.916 "rw_ios_per_sec": 0, 00:05:32.916 "rw_mbytes_per_sec": 0, 00:05:32.916 "r_mbytes_per_sec": 0, 00:05:32.916 "w_mbytes_per_sec": 0 00:05:32.916 }, 00:05:32.916 "claimed": false, 00:05:32.916 "zoned": false, 00:05:32.916 "supported_io_types": { 00:05:32.916 "read": true, 00:05:32.916 "write": true, 00:05:32.916 "unmap": true, 00:05:32.916 "write_zeroes": true, 00:05:32.916 "flush": true, 00:05:32.916 "reset": true, 00:05:32.916 "compare": false, 00:05:32.916 "compare_and_write": false, 00:05:32.916 "abort": true, 00:05:32.916 "nvme_admin": false, 00:05:32.916 "nvme_io": false 00:05:32.916 }, 00:05:32.916 "memory_domains": [ 00:05:32.916 { 00:05:32.916 "dma_device_id": "system", 00:05:32.916 "dma_device_type": 1 00:05:32.916 }, 00:05:32.916 { 00:05:32.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.916 "dma_device_type": 2 00:05:32.916 } 00:05:32.916 ], 00:05:32.916 "driver_specific": { 00:05:32.916 "passthru": { 00:05:32.916 "name": "Passthru0", 00:05:32.916 "base_bdev_name": "Malloc0" 00:05:32.916 } 00:05:32.916 } 00:05:32.916 } 00:05:32.916 ]' 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 
16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.916 16:26:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.916 00:05:32.916 real 0m0.229s 00:05:32.916 user 0m0.146s 00:05:32.916 sys 0m0.025s 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.916 16:26:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 ************************************ 00:05:32.916 END TEST rpc_integrity 00:05:32.916 ************************************ 00:05:32.916 16:26:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:32.916 16:26:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.916 16:26:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.916 16:26:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 ************************************ 00:05:32.916 START TEST rpc_plugins 00:05:32.916 ************************************ 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:32.916 { 00:05:32.916 "name": "Malloc1", 00:05:32.916 "aliases": [ 00:05:32.916 "513363ba-dbe8-4239-ac1c-053580417a13" 00:05:32.916 ], 00:05:32.916 "product_name": "Malloc disk", 00:05:32.916 "block_size": 4096, 00:05:32.916 "num_blocks": 256, 00:05:32.916 "uuid": "513363ba-dbe8-4239-ac1c-053580417a13", 00:05:32.916 "assigned_rate_limits": { 00:05:32.916 "rw_ios_per_sec": 0, 00:05:32.916 "rw_mbytes_per_sec": 0, 00:05:32.916 "r_mbytes_per_sec": 0, 00:05:32.916 "w_mbytes_per_sec": 0 00:05:32.916 }, 00:05:32.916 "claimed": false, 00:05:32.916 "zoned": false, 00:05:32.916 "supported_io_types": { 00:05:32.916 "read": true, 00:05:32.916 "write": true, 00:05:32.916 "unmap": true, 00:05:32.916 "write_zeroes": true, 00:05:32.916 
"flush": true, 00:05:32.916 "reset": true, 00:05:32.916 "compare": false, 00:05:32.916 "compare_and_write": false, 00:05:32.916 "abort": true, 00:05:32.916 "nvme_admin": false, 00:05:32.916 "nvme_io": false 00:05:32.916 }, 00:05:32.916 "memory_domains": [ 00:05:32.916 { 00:05:32.916 "dma_device_id": "system", 00:05:32.916 "dma_device_type": 1 00:05:32.916 }, 00:05:32.916 { 00:05:32.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.916 "dma_device_type": 2 00:05:32.916 } 00:05:32.916 ], 00:05:32.916 "driver_specific": {} 00:05:32.916 } 00:05:32.916 ]' 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:32.916 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.174 16:26:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.174 00:05:33.174 real 0m0.115s 00:05:33.174 user 0m0.072s 00:05:33.174 sys 0m0.013s 00:05:33.174 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.174 16:26:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.174 ************************************ 00:05:33.174 END TEST rpc_plugins 00:05:33.174 ************************************ 00:05:33.174 16:26:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.174 16:26:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.174 16:26:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.174 16:26:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.174 ************************************ 00:05:33.174 START TEST rpc_trace_cmd_test 00:05:33.174 ************************************ 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.174 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1635059", 00:05:33.174 "tpoint_group_mask": "0x8", 00:05:33.174 "iscsi_conn": { 00:05:33.174 "mask": "0x2", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "scsi": { 00:05:33.174 "mask": "0x4", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "bdev": { 00:05:33.174 "mask": "0x8", 00:05:33.174 "tpoint_mask": 
"0xffffffffffffffff" 00:05:33.174 }, 00:05:33.174 "nvmf_rdma": { 00:05:33.174 "mask": "0x10", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "nvmf_tcp": { 00:05:33.174 "mask": "0x20", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "ftl": { 00:05:33.174 "mask": "0x40", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "blobfs": { 00:05:33.174 "mask": "0x80", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "dsa": { 00:05:33.174 "mask": "0x200", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "thread": { 00:05:33.174 "mask": "0x400", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "nvme_pcie": { 00:05:33.174 "mask": "0x800", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "iaa": { 00:05:33.174 "mask": "0x1000", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "nvme_tcp": { 00:05:33.174 "mask": "0x2000", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "bdev_nvme": { 00:05:33.174 "mask": "0x4000", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 }, 00:05:33.174 "sock": { 00:05:33.174 "mask": "0x8000", 00:05:33.174 "tpoint_mask": "0x0" 00:05:33.174 } 00:05:33.174 }' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.174 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.433 16:26:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.433 00:05:33.433 real 0m0.201s 00:05:33.433 user 0m0.179s 00:05:33.433 sys 0m0.013s 00:05:33.433 16:26:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.433 16:26:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 ************************************ 00:05:33.433 END TEST rpc_trace_cmd_test 00:05:33.433 ************************************ 00:05:33.433 16:26:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.433 16:26:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.433 16:26:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.433 16:26:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.433 16:26:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.433 16:26:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 ************************************ 00:05:33.433 START TEST rpc_daemon_integrity 00:05:33.433 ************************************ 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.433 { 00:05:33.433 "name": "Malloc2", 00:05:33.433 "aliases": [ 00:05:33.433 "fea93144-370c-4527-afff-ecc03ba087d3" 00:05:33.433 ], 00:05:33.433 "product_name": "Malloc disk", 00:05:33.433 "block_size": 512, 00:05:33.433 "num_blocks": 16384, 00:05:33.433 "uuid": "fea93144-370c-4527-afff-ecc03ba087d3", 00:05:33.433 "assigned_rate_limits": { 00:05:33.433 "rw_ios_per_sec": 0, 00:05:33.433 "rw_mbytes_per_sec": 0, 00:05:33.433 "r_mbytes_per_sec": 0, 00:05:33.433 "w_mbytes_per_sec": 0 00:05:33.433 }, 00:05:33.433 "claimed": false, 00:05:33.433 "zoned": false, 00:05:33.433 "supported_io_types": { 00:05:33.433 "read": true, 00:05:33.433 "write": true, 00:05:33.433 "unmap": true, 00:05:33.433 "write_zeroes": true, 00:05:33.433 "flush": true, 00:05:33.433 "reset": true, 00:05:33.433 "compare": false, 00:05:33.433 "compare_and_write": false, 00:05:33.433 "abort": true, 00:05:33.433 "nvme_admin": false, 00:05:33.433 "nvme_io": false 00:05:33.433 }, 00:05:33.433 "memory_domains": [ 00:05:33.433 { 00:05:33.433 "dma_device_id": "system", 00:05:33.433 "dma_device_type": 1 00:05:33.433 }, 00:05:33.433 { 00:05:33.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.433 "dma_device_type": 2 00:05:33.433 } 00:05:33.433 ], 00:05:33.433 "driver_specific": {} 00:05:33.433 } 00:05:33.433 ]' 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 [2024-05-15 16:26:40.574244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.433 [2024-05-15 16:26:40.574300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.433 [2024-05-15 16:26:40.574341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x889940 00:05:33.433 [2024-05-15 16:26:40.574357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.433 [2024-05-15 16:26:40.575755] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.433 [2024-05-15 16:26:40.575783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.433 Passthru0 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.433 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.433 { 00:05:33.433 "name": "Malloc2", 00:05:33.433 "aliases": [ 00:05:33.433 "fea93144-370c-4527-afff-ecc03ba087d3" 00:05:33.433 ], 00:05:33.433 "product_name": "Malloc disk", 00:05:33.433 "block_size": 512, 00:05:33.433 "num_blocks": 16384, 00:05:33.433 "uuid": "fea93144-370c-4527-afff-ecc03ba087d3", 00:05:33.433 "assigned_rate_limits": { 00:05:33.433 "rw_ios_per_sec": 0, 00:05:33.433 "rw_mbytes_per_sec": 0, 00:05:33.433 "r_mbytes_per_sec": 0, 00:05:33.433 "w_mbytes_per_sec": 0 00:05:33.433 }, 00:05:33.433 "claimed": true, 00:05:33.433 "claim_type": "exclusive_write", 00:05:33.433 "zoned": false, 00:05:33.433 "supported_io_types": { 00:05:33.433 "read": true, 00:05:33.433 "write": true, 00:05:33.433 "unmap": true, 00:05:33.433 "write_zeroes": true, 00:05:33.433 "flush": true, 00:05:33.433 "reset": true, 00:05:33.433 "compare": false, 00:05:33.433 "compare_and_write": false, 00:05:33.433 "abort": true, 00:05:33.433 "nvme_admin": false, 00:05:33.433 "nvme_io": false 00:05:33.433 }, 00:05:33.433 "memory_domains": [ 00:05:33.433 { 00:05:33.433 "dma_device_id": "system", 00:05:33.433 "dma_device_type": 1 00:05:33.433 }, 00:05:33.433 { 00:05:33.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.433 "dma_device_type": 2 00:05:33.433 } 00:05:33.433 ], 00:05:33.433 "driver_specific": {} 00:05:33.433 }, 00:05:33.433 { 00:05:33.433 "name": "Passthru0", 00:05:33.433 "aliases": [ 00:05:33.433 "db8c628f-2453-5515-ade8-c7cc5a13eff0" 00:05:33.433 ], 00:05:33.433 "product_name": "passthru", 00:05:33.433 "block_size": 512, 00:05:33.433 "num_blocks": 16384, 00:05:33.433 "uuid": "db8c628f-2453-5515-ade8-c7cc5a13eff0", 00:05:33.433 "assigned_rate_limits": { 00:05:33.433 "rw_ios_per_sec": 0, 00:05:33.433 "rw_mbytes_per_sec": 0, 00:05:33.434 "r_mbytes_per_sec": 0, 00:05:33.434 "w_mbytes_per_sec": 0 00:05:33.434 }, 00:05:33.434 "claimed": false, 00:05:33.434 "zoned": false, 00:05:33.434 "supported_io_types": { 00:05:33.434 "read": true, 00:05:33.434 "write": true, 00:05:33.434 "unmap": true, 00:05:33.434 "write_zeroes": true, 00:05:33.434 "flush": true, 00:05:33.434 "reset": true, 00:05:33.434 "compare": false, 00:05:33.434 "compare_and_write": false, 00:05:33.434 "abort": true, 00:05:33.434 "nvme_admin": false, 00:05:33.434 "nvme_io": false 00:05:33.434 }, 00:05:33.434 "memory_domains": [ 00:05:33.434 { 00:05:33.434 "dma_device_id": "system", 00:05:33.434 "dma_device_type": 1 00:05:33.434 }, 00:05:33.434 { 00:05:33.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.434 "dma_device_type": 2 00:05:33.434 } 00:05:33.434 ], 00:05:33.434 "driver_specific": { 00:05:33.434 "passthru": { 00:05:33.434 "name": "Passthru0", 00:05:33.434 "base_bdev_name": "Malloc2" 00:05:33.434 } 00:05:33.434 } 00:05:33.434 } 00:05:33.434 ]' 00:05:33.434 16:26:40 
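That second bdev_get_bdevs dump is the substance of the daemon-integrity check: after bdev_passthru_create, the base bdev Malloc2 reports "claimed": true with "claim_type": "exclusive_write", while Passthru0 itself stays unclaimed and names its base in driver_specific. A quick way to eyeball the same claim state by hand, assuming a target on the default RPC socket:

    # One line per bdev with its claim state; "-" marks bdevs with no claim type.
    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name)\tclaimed=\(.claimed)\t\(.claim_type // "-")"'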
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.434 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.691 16:26:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.691 00:05:33.691 real 0m0.231s 00:05:33.691 user 0m0.154s 00:05:33.691 sys 0m0.023s 00:05:33.691 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.691 16:26:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.691 ************************************ 00:05:33.691 END TEST rpc_daemon_integrity 00:05:33.691 ************************************ 00:05:33.691 16:26:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.691 16:26:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1635059 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@946 -- # '[' -z 1635059 ']' 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@950 -- # kill -0 1635059 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@951 -- # uname 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1635059 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1635059' 00:05:33.691 killing process with pid 1635059 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@965 -- # kill 1635059 00:05:33.691 16:26:40 rpc -- common/autotest_common.sh@970 -- # wait 1635059 00:05:33.948 00:05:33.948 real 0m1.930s 00:05:33.948 user 0m2.403s 00:05:33.948 sys 0m0.618s 00:05:33.948 16:26:41 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.948 16:26:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.948 ************************************ 00:05:33.948 END TEST rpc 00:05:33.948 ************************************ 00:05:34.206 16:26:41 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.206 16:26:41 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.206 16:26:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.206 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.206 ************************************ 00:05:34.206 START TEST skip_rpc 00:05:34.206 ************************************ 00:05:34.206 16:26:41 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.206 * Looking for test storage... 00:05:34.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:34.206 16:26:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.206 16:26:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:34.206 16:26:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.206 16:26:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.206 16:26:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.206 16:26:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.206 ************************************ 00:05:34.206 START TEST skip_rpc 00:05:34.206 ************************************ 00:05:34.206 16:26:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:34.206 16:26:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1635495 00:05:34.206 16:26:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.206 16:26:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.206 16:26:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.206 [2024-05-15 16:26:41.345630] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
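The skip_rpc case that starts here is intentionally inverted: the target is launched with --no-rpc-server, the harness sleeps five seconds instead of waiting for a socket, and the subsequent rpc_cmd spdk_get_version is required to fail (the NOT wrapper flips the exit status). Reduced to its essentials, with illustrative paths for the binary and client:

    # No RPC server is started, so any RPC call afterwards must fail.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5   # there is no socket to waitforlisten on
    if ./scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly succeeded" >&2
    fi
    kill -9 $tgt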
00:05:34.206 [2024-05-15 16:26:41.345691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635495 ] 00:05:34.206 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.206 [2024-05-15 16:26:41.411171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.464 [2024-05-15 16:26:41.506590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1635495 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1635495 ']' 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1635495 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1635495 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1635495' 00:05:39.725 killing process with pid 1635495 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1635495 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1635495 00:05:39.725 00:05:39.725 real 0m5.450s 00:05:39.725 user 0m5.112s 00:05:39.725 sys 0m0.346s 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.725 16:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.725 ************************************ 00:05:39.725 END TEST skip_rpc 
00:05:39.725 ************************************ 00:05:39.725 16:26:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.725 16:26:46 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.725 16:26:46 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.725 16:26:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.725 ************************************ 00:05:39.725 START TEST skip_rpc_with_json 00:05:39.725 ************************************ 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1636177 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1636177 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1636177 ']' 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.725 16:26:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.725 [2024-05-15 16:26:46.847624] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:05:39.725 [2024-05-15 16:26:46.847702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636177 ] 00:05:39.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.725 [2024-05-15 16:26:46.916888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.991 [2024-05-15 16:26:47.005062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 [2024-05-15 16:26:47.265192] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.294 request: 00:05:40.294 { 00:05:40.294 "trtype": "tcp", 00:05:40.294 "method": "nvmf_get_transports", 00:05:40.294 "req_id": 1 00:05:40.294 } 00:05:40.294 Got JSON-RPC error response 00:05:40.294 response: 00:05:40.294 { 00:05:40.294 "code": -19, 00:05:40.294 "message": "No such device" 00:05:40.294 } 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 [2024-05-15 16:26:47.273333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.294 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.294 { 00:05:40.294 "subsystems": [ 00:05:40.294 { 00:05:40.294 "subsystem": "vfio_user_target", 00:05:40.294 "config": null 00:05:40.294 }, 00:05:40.294 { 00:05:40.294 "subsystem": "keyring", 00:05:40.294 "config": [] 00:05:40.294 }, 00:05:40.294 { 00:05:40.294 "subsystem": "iobuf", 00:05:40.294 "config": [ 00:05:40.294 { 00:05:40.294 "method": "iobuf_set_options", 00:05:40.294 "params": { 00:05:40.294 "small_pool_count": 8192, 00:05:40.294 "large_pool_count": 1024, 00:05:40.294 "small_bufsize": 8192, 00:05:40.294 "large_bufsize": 135168 00:05:40.294 } 00:05:40.294 } 00:05:40.294 ] 00:05:40.294 }, 00:05:40.294 { 00:05:40.294 "subsystem": "sock", 00:05:40.294 "config": [ 00:05:40.294 { 00:05:40.294 "method": "sock_impl_set_options", 00:05:40.294 "params": { 00:05:40.294 "impl_name": "posix", 00:05:40.294 "recv_buf_size": 2097152, 00:05:40.294 "send_buf_size": 2097152, 
00:05:40.294 "enable_recv_pipe": true, 00:05:40.294 "enable_quickack": false, 00:05:40.294 "enable_placement_id": 0, 00:05:40.294 "enable_zerocopy_send_server": true, 00:05:40.294 "enable_zerocopy_send_client": false, 00:05:40.294 "zerocopy_threshold": 0, 00:05:40.294 "tls_version": 0, 00:05:40.294 "enable_ktls": false 00:05:40.294 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "sock_impl_set_options", 00:05:40.295 "params": { 00:05:40.295 "impl_name": "ssl", 00:05:40.295 "recv_buf_size": 4096, 00:05:40.295 "send_buf_size": 4096, 00:05:40.295 "enable_recv_pipe": true, 00:05:40.295 "enable_quickack": false, 00:05:40.295 "enable_placement_id": 0, 00:05:40.295 "enable_zerocopy_send_server": true, 00:05:40.295 "enable_zerocopy_send_client": false, 00:05:40.295 "zerocopy_threshold": 0, 00:05:40.295 "tls_version": 0, 00:05:40.295 "enable_ktls": false 00:05:40.295 } 00:05:40.295 } 00:05:40.295 ] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "vmd", 00:05:40.295 "config": [] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "accel", 00:05:40.295 "config": [ 00:05:40.295 { 00:05:40.295 "method": "accel_set_options", 00:05:40.295 "params": { 00:05:40.295 "small_cache_size": 128, 00:05:40.295 "large_cache_size": 16, 00:05:40.295 "task_count": 2048, 00:05:40.295 "sequence_count": 2048, 00:05:40.295 "buf_count": 2048 00:05:40.295 } 00:05:40.295 } 00:05:40.295 ] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "bdev", 00:05:40.295 "config": [ 00:05:40.295 { 00:05:40.295 "method": "bdev_set_options", 00:05:40.295 "params": { 00:05:40.295 "bdev_io_pool_size": 65535, 00:05:40.295 "bdev_io_cache_size": 256, 00:05:40.295 "bdev_auto_examine": true, 00:05:40.295 "iobuf_small_cache_size": 128, 00:05:40.295 "iobuf_large_cache_size": 16 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "bdev_raid_set_options", 00:05:40.295 "params": { 00:05:40.295 "process_window_size_kb": 1024 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "bdev_iscsi_set_options", 00:05:40.295 "params": { 00:05:40.295 "timeout_sec": 30 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "bdev_nvme_set_options", 00:05:40.295 "params": { 00:05:40.295 "action_on_timeout": "none", 00:05:40.295 "timeout_us": 0, 00:05:40.295 "timeout_admin_us": 0, 00:05:40.295 "keep_alive_timeout_ms": 10000, 00:05:40.295 "arbitration_burst": 0, 00:05:40.295 "low_priority_weight": 0, 00:05:40.295 "medium_priority_weight": 0, 00:05:40.295 "high_priority_weight": 0, 00:05:40.295 "nvme_adminq_poll_period_us": 10000, 00:05:40.295 "nvme_ioq_poll_period_us": 0, 00:05:40.295 "io_queue_requests": 0, 00:05:40.295 "delay_cmd_submit": true, 00:05:40.295 "transport_retry_count": 4, 00:05:40.295 "bdev_retry_count": 3, 00:05:40.295 "transport_ack_timeout": 0, 00:05:40.295 "ctrlr_loss_timeout_sec": 0, 00:05:40.295 "reconnect_delay_sec": 0, 00:05:40.295 "fast_io_fail_timeout_sec": 0, 00:05:40.295 "disable_auto_failback": false, 00:05:40.295 "generate_uuids": false, 00:05:40.295 "transport_tos": 0, 00:05:40.295 "nvme_error_stat": false, 00:05:40.295 "rdma_srq_size": 0, 00:05:40.295 "io_path_stat": false, 00:05:40.295 "allow_accel_sequence": false, 00:05:40.295 "rdma_max_cq_size": 0, 00:05:40.295 "rdma_cm_event_timeout_ms": 0, 00:05:40.295 "dhchap_digests": [ 00:05:40.295 "sha256", 00:05:40.295 "sha384", 00:05:40.295 "sha512" 00:05:40.295 ], 00:05:40.295 "dhchap_dhgroups": [ 00:05:40.295 "null", 00:05:40.295 "ffdhe2048", 00:05:40.295 "ffdhe3072", 00:05:40.295 "ffdhe4096", 00:05:40.295 
"ffdhe6144", 00:05:40.295 "ffdhe8192" 00:05:40.295 ] 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "bdev_nvme_set_hotplug", 00:05:40.295 "params": { 00:05:40.295 "period_us": 100000, 00:05:40.295 "enable": false 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "bdev_wait_for_examine" 00:05:40.295 } 00:05:40.295 ] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "scsi", 00:05:40.295 "config": null 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "scheduler", 00:05:40.295 "config": [ 00:05:40.295 { 00:05:40.295 "method": "framework_set_scheduler", 00:05:40.295 "params": { 00:05:40.295 "name": "static" 00:05:40.295 } 00:05:40.295 } 00:05:40.295 ] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "vhost_scsi", 00:05:40.295 "config": [] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "vhost_blk", 00:05:40.295 "config": [] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "ublk", 00:05:40.295 "config": [] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "nbd", 00:05:40.295 "config": [] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "nvmf", 00:05:40.295 "config": [ 00:05:40.295 { 00:05:40.295 "method": "nvmf_set_config", 00:05:40.295 "params": { 00:05:40.295 "discovery_filter": "match_any", 00:05:40.295 "admin_cmd_passthru": { 00:05:40.295 "identify_ctrlr": false 00:05:40.295 } 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "nvmf_set_max_subsystems", 00:05:40.295 "params": { 00:05:40.295 "max_subsystems": 1024 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "nvmf_set_crdt", 00:05:40.295 "params": { 00:05:40.295 "crdt1": 0, 00:05:40.295 "crdt2": 0, 00:05:40.295 "crdt3": 0 00:05:40.295 } 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "method": "nvmf_create_transport", 00:05:40.295 "params": { 00:05:40.295 "trtype": "TCP", 00:05:40.295 "max_queue_depth": 128, 00:05:40.295 "max_io_qpairs_per_ctrlr": 127, 00:05:40.295 "in_capsule_data_size": 4096, 00:05:40.295 "max_io_size": 131072, 00:05:40.295 "io_unit_size": 131072, 00:05:40.295 "max_aq_depth": 128, 00:05:40.295 "num_shared_buffers": 511, 00:05:40.295 "buf_cache_size": 4294967295, 00:05:40.295 "dif_insert_or_strip": false, 00:05:40.295 "zcopy": false, 00:05:40.295 "c2h_success": true, 00:05:40.295 "sock_priority": 0, 00:05:40.295 "abort_timeout_sec": 1, 00:05:40.295 "ack_timeout": 0, 00:05:40.295 "data_wr_pool_size": 0 00:05:40.295 } 00:05:40.295 } 00:05:40.295 ] 00:05:40.295 }, 00:05:40.295 { 00:05:40.295 "subsystem": "iscsi", 00:05:40.295 "config": [ 00:05:40.295 { 00:05:40.295 "method": "iscsi_set_options", 00:05:40.295 "params": { 00:05:40.295 "node_base": "iqn.2016-06.io.spdk", 00:05:40.295 "max_sessions": 128, 00:05:40.295 "max_connections_per_session": 2, 00:05:40.295 "max_queue_depth": 64, 00:05:40.295 "default_time2wait": 2, 00:05:40.295 "default_time2retain": 20, 00:05:40.295 "first_burst_length": 8192, 00:05:40.295 "immediate_data": true, 00:05:40.295 "allow_duplicated_isid": false, 00:05:40.295 "error_recovery_level": 0, 00:05:40.295 "nop_timeout": 60, 00:05:40.295 "nop_in_interval": 30, 00:05:40.295 "disable_chap": false, 00:05:40.295 "require_chap": false, 00:05:40.295 "mutual_chap": false, 00:05:40.295 "chap_group": 0, 00:05:40.295 "max_large_datain_per_connection": 64, 00:05:40.295 "max_r2t_per_connection": 4, 00:05:40.295 "pdu_pool_size": 36864, 00:05:40.295 "immediate_data_pool_size": 16384, 00:05:40.295 "data_out_pool_size": 2048 00:05:40.295 } 00:05:40.295 } 00:05:40.295 ] 00:05:40.295 } 
00:05:40.295 ] 00:05:40.295 } 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1636177 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1636177 ']' 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1636177 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1636177 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1636177' 00:05:40.295 killing process with pid 1636177 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1636177 00:05:40.295 16:26:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1636177 00:05:40.860 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1636324 00:05:40.860 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.860 16:26:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1636324 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1636324 ']' 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1636324 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1636324 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1636324' 00:05:46.124 killing process with pid 1636324 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1636324 00:05:46.124 16:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1636324 00:05:46.124 16:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.124 16:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.124 00:05:46.124 real 0m6.505s 00:05:46.124 user 0m6.063s 00:05:46.124 sys 0m0.729s 00:05:46.124 16:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
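With the config dump above written out, the first target is killed and the second half of skip_rpc_with_json begins: restart spdk_tgt cold from the saved JSON and grep its log for the 'TCP Transport Init' banner, proving the nvmf transport created over RPC survived the save_config round trip. The shape of that round trip, with illustrative file names:

    # Phase 1: configure a live target over RPC, then snapshot everything.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > config.json

    # Phase 2: cold-start a fresh target directly from the snapshot.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo "transport restored from JSON"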
00:05:46.124 16:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.124 ************************************ 00:05:46.124 END TEST skip_rpc_with_json 00:05:46.124 ************************************ 00:05:46.124 16:26:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.124 16:26:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.124 16:26:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.124 16:26:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.382 ************************************ 00:05:46.382 START TEST skip_rpc_with_delay 00:05:46.382 ************************************ 00:05:46.382 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.383 [2024-05-15 16:26:53.404664] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
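The skip_rpc_with_delay case is pure argument validation: --wait-for-rpc only makes sense when an RPC server will exist, so combining it with --no-rpc-server must abort inside spdk_app_start with the error shown above, and the NOT wrapper asserts the non-zero exit. Condensed, with an illustrative binary path:

    # Contradictory flags: spdk_tgt has to refuse to start.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "flag conflict was not rejected" >&2
    fi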
00:05:46.383 [2024-05-15 16:26:53.404791] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.383 00:05:46.383 real 0m0.067s 00:05:46.383 user 0m0.041s 00:05:46.383 sys 0m0.025s 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.383 16:26:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.383 ************************************ 00:05:46.383 END TEST skip_rpc_with_delay 00:05:46.383 ************************************ 00:05:46.383 16:26:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.383 16:26:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.383 16:26:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.383 16:26:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.383 16:26:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.383 16:26:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.383 ************************************ 00:05:46.383 START TEST exit_on_failed_rpc_init 00:05:46.383 ************************************ 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1637055 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1637055 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1637055 ']' 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.383 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.383 [2024-05-15 16:26:53.520443] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
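exit_on_failed_rpc_init, starting here, tests socket contention: the first target owns /var/tmp/spdk.sock, so a second spdk_tgt launched on a different core mask must fail RPC initialization ('RPC Unix domain socket path /var/tmp/spdk.sock in use', visible a little further down) and exit non-zero. In outline, with illustrative paths:

    # First target claims the default RPC socket.
    build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 5   # crude stand-in for the harness's waitforlisten

    # Second target must die during rpc init: the socket is already taken.
    if build/bin/spdk_tgt -m 0x2; then
        echo "second target unexpectedly initialized" >&2
    fi
    kill -9 $first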
00:05:46.383 [2024-05-15 16:26:53.520543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637055 ] 00:05:46.383 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.383 [2024-05-15 16:26:53.585927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.641 [2024-05-15 16:26:53.671477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.900 16:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.900 [2024-05-15 16:26:53.971547] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:05:46.900 [2024-05-15 16:26:53.971642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637073 ] 00:05:46.900 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.900 [2024-05-15 16:26:54.042343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.162 [2024-05-15 16:26:54.135640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.162 [2024-05-15 16:26:54.135750] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:47.162 [2024-05-15 16:26:54.135772] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.162 [2024-05-15 16:26:54.135786] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1637055 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1637055 ']' 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1637055 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1637055 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1637055' 00:05:47.162 killing process with pid 1637055 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1637055 00:05:47.162 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1637055 00:05:47.728 00:05:47.728 real 0m1.179s 00:05:47.728 user 0m1.279s 00:05:47.728 sys 0m0.463s 00:05:47.728 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.728 16:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.728 ************************************ 00:05:47.728 END TEST exit_on_failed_rpc_init 00:05:47.728 ************************************ 00:05:47.728 16:26:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.728 00:05:47.728 real 0m13.469s 00:05:47.728 user 0m12.596s 00:05:47.728 sys 0m1.738s 00:05:47.728 16:26:54 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.728 16:26:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.728 ************************************ 00:05:47.728 END TEST skip_rpc 00:05:47.728 ************************************ 00:05:47.728 16:26:54 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.728 16:26:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.728 16:26:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.728 16:26:54 -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.728 ************************************ 00:05:47.728 START TEST rpc_client 00:05:47.728 ************************************ 00:05:47.728 16:26:54 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.728 * Looking for test storage... 00:05:47.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:47.728 16:26:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:47.728 OK 00:05:47.728 16:26:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.728 00:05:47.728 real 0m0.067s 00:05:47.728 user 0m0.030s 00:05:47.728 sys 0m0.043s 00:05:47.728 16:26:54 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.728 16:26:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:47.728 ************************************ 00:05:47.728 END TEST rpc_client 00:05:47.728 ************************************ 00:05:47.728 16:26:54 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.728 16:26:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.728 16:26:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.728 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.728 ************************************ 00:05:47.728 START TEST json_config 00:05:47.728 ************************************ 00:05:47.728 16:26:54 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.728 16:26:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.728 16:26:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.728 16:26:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.728 16:26:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.728 16:26:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.728 16:26:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.728 16:26:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.728 16:26:54 json_config -- paths/export.sh@5 -- # export PATH 00:05:47.728 16:26:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@47 -- # : 0 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.728 16:26:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.729 16:26:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.729 16:26:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.729 16:26:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.729 16:26:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.729 16:26:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:47.729 INFO: JSON configuration test init 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.729 16:26:54 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.729 16:26:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:47.729 16:26:54 json_config -- json_config/common.sh@10 -- # shift 00:05:47.729 16:26:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.729 16:26:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.729 16:26:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.729 16:26:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.729 16:26:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.729 16:26:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1637309 00:05:47.729 16:26:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:47.729 16:26:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.729 Waiting for target to run... 
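[editor sketch] The start-and-wait pattern traced here (spdk_tgt launched with --wait-for-rpc, then waitforlisten polling the UNIX socket) reduces to roughly the following. Paths match this workspace; the 100-iteration poll is illustrative and not the exact waitforlisten implementation:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    # Single core (-m 0x1), 1024 MB of hugepage memory (-s 1024), custom RPC
    # socket (-r), and hold subsystem init until an RPC says go (--wait-for-rpc).
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!
    # Poll until the RPC server answers on the socket, as waitforlisten does.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done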
00:05:47.729 16:26:54 json_config -- json_config/common.sh@25 -- # waitforlisten 1637309 /var/tmp/spdk_tgt.sock 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@827 -- # '[' -z 1637309 ']' 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.729 16:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.729 [2024-05-15 16:26:54.942910] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:05:47.729 [2024-05-15 16:26:54.943003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637309 ] 00:05:47.987 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.245 [2024-05-15 16:26:55.303579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.245 [2024-05-15 16:26:55.363345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.809 16:26:55 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.809 16:26:55 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:48.809 16:26:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:48.809 00:05:48.809 16:26:55 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:48.809 16:26:55 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:48.809 16:26:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:48.809 16:26:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.809 16:26:55 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:48.809 16:26:55 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:48.809 16:26:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.810 16:26:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.810 16:26:55 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:48.810 16:26:55 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:48.810 16:26:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:52.121 16:26:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:52.121 16:26:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.121 16:26:59 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:52.121 16:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:52.121 16:26:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.121 16:26:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:52.121 16:26:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:52.121 16:26:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:52.121 16:26:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.121 16:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.378 MallocForNvmf0 00:05:52.378 16:26:59 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.378 16:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.635 MallocForNvmf1 00:05:52.635 16:26:59 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.635 16:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.893 [2024-05-15 16:27:00.060878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.893 16:27:00 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.893 16:27:00 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.151 16:27:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.151 16:27:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.408 16:27:00 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.408 16:27:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.665 16:27:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.665 16:27:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.923 [2024-05-15 16:27:01.079769] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:53.923 [2024-05-15 16:27:01.080389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:53.923 16:27:01 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:53.923 16:27:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.923 16:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 16:27:01 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:53.923 16:27:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.923 16:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 16:27:01 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:53.923 16:27:01 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.923 16:27:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.180 MallocBdevForConfigChangeCheck 00:05:54.180 16:27:01 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:54.180 16:27:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.180 16:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.438 16:27:01 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:54.438 16:27:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.696 16:27:01 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:54.696 INFO: shutting down applications... 
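[editor sketch] The create_nvmf_subsystem_config steps traced before the shutdown notice above amount to the following RPC sequence. The commands are exactly as traced; the comments on sizes and flags are editorial glosses:

    RPC=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock)
    # Backing bdevs: 8 MiB with 512-byte blocks, 4 MiB with 1024-byte blocks.
    "${RPC[@]}" bdev_malloc_create 8 512 --name MallocForNvmf0
    "${RPC[@]}" bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport; -u sets the IO unit size, -c the in-capsule data size.
    "${RPC[@]}" nvmf_create_transport -t tcp -u 8192 -c 0
    # One subsystem (-a allow any host, -s serial number) carrying both
    # namespaces, listening on 127.0.0.1:4420 over TCP.
    "${RPC[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    "${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    "${RPC[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420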
00:05:54.696 16:27:01 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:54.696 16:27:01 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:54.696 16:27:01 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:54.696 16:27:01 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:56.595 Calling clear_iscsi_subsystem 00:05:56.595 Calling clear_nvmf_subsystem 00:05:56.595 Calling clear_nbd_subsystem 00:05:56.595 Calling clear_ublk_subsystem 00:05:56.595 Calling clear_vhost_blk_subsystem 00:05:56.595 Calling clear_vhost_scsi_subsystem 00:05:56.595 Calling clear_bdev_subsystem 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@345 -- # break 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:56.595 16:27:03 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:56.595 16:27:03 json_config -- json_config/common.sh@31 -- # local app=target 00:05:56.595 16:27:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.595 16:27:03 json_config -- json_config/common.sh@35 -- # [[ -n 1637309 ]] 00:05:56.595 16:27:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1637309 00:05:56.595 [2024-05-15 16:27:03.736191] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:56.595 16:27:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.595 16:27:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.595 16:27:03 json_config -- json_config/common.sh@41 -- # kill -0 1637309 00:05:56.595 16:27:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.162 16:27:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.162 16:27:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.162 16:27:04 json_config -- json_config/common.sh@41 -- # kill -0 1637309 00:05:57.162 16:27:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.162 16:27:04 json_config -- json_config/common.sh@43 -- # break 00:05:57.162 16:27:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.162 16:27:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.162 SPDK target shutdown done 00:05:57.162 16:27:04 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:05:57.162 INFO: relaunching applications... 00:05:57.162 16:27:04 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.162 16:27:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:57.162 16:27:04 json_config -- json_config/common.sh@10 -- # shift 00:05:57.162 16:27:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.162 16:27:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.162 16:27:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.162 16:27:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.162 16:27:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.162 16:27:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1638615 00:05:57.162 16:27:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.162 16:27:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.162 Waiting for target to run... 00:05:57.162 16:27:04 json_config -- json_config/common.sh@25 -- # waitforlisten 1638615 /var/tmp/spdk_tgt.sock 00:05:57.162 16:27:04 json_config -- common/autotest_common.sh@827 -- # '[' -z 1638615 ']' 00:05:57.162 16:27:04 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.162 16:27:04 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.162 16:27:04 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.162 16:27:04 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.162 16:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.162 [2024-05-15 16:27:04.293487] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:05:57.162 [2024-05-15 16:27:04.293595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638615 ] 00:05:57.162 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.729 [2024-05-15 16:27:04.834566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.729 [2024-05-15 16:27:04.917339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.071 [2024-05-15 16:27:07.950129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.071 [2024-05-15 16:27:07.982088] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:01.071 [2024-05-15 16:27:07.982716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:01.647 16:27:08 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.647 16:27:08 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:01.647 16:27:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:01.647 00:06:01.647 16:27:08 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:01.647 16:27:08 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:01.647 INFO: Checking if target configuration is the same... 00:06:01.647 16:27:08 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.647 16:27:08 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:01.647 16:27:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.647 + '[' 2 -ne 2 ']' 00:06:01.647 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:01.647 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:01.647 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:01.647 +++ basename /dev/fd/62 00:06:01.647 ++ mktemp /tmp/62.XXX 00:06:01.647 + tmp_file_1=/tmp/62.abz 00:06:01.647 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.647 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:01.647 + tmp_file_2=/tmp/spdk_tgt_config.json.H9e 00:06:01.647 + ret=0 00:06:01.647 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:02.214 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:02.214 + diff -u /tmp/62.abz /tmp/spdk_tgt_config.json.H9e 00:06:02.214 + echo 'INFO: JSON config files are the same' 00:06:02.214 INFO: JSON config files are the same 00:06:02.214 + rm /tmp/62.abz /tmp/spdk_tgt_config.json.H9e 00:06:02.214 + exit 0 00:06:02.214 16:27:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:02.214 16:27:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:02.214 INFO: changing configuration and checking if this can be detected... 
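[editor sketch] The json_diff.sh run traced above compares the live configuration against spdk_tgt_config.json by sorting both through config_filter.py and diffing; a condensed equivalent, assuming the same workspace layout:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    live=$(mktemp /tmp/62.XXX)
    ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Normalize both configs so key/array ordering cannot cause false diffs.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live"
    "$SPDK_DIR/test/json_config/config_filter.py" -method sort \
        < "$SPDK_DIR/spdk_tgt_config.json" > "$ref"
    if diff -u "$live" "$ref"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$ref"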
00:06:02.214 16:27:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:02.214 16:27:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:02.472 16:27:09 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.472 16:27:09 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:02.472 16:27:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.472 + '[' 2 -ne 2 ']' 00:06:02.472 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:02.472 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:02.472 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:02.472 +++ basename /dev/fd/62 00:06:02.472 ++ mktemp /tmp/62.XXX 00:06:02.472 + tmp_file_1=/tmp/62.3BO 00:06:02.472 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.472 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:02.472 + tmp_file_2=/tmp/spdk_tgt_config.json.NZP 00:06:02.472 + ret=0 00:06:02.472 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:02.730 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:02.730 + diff -u /tmp/62.3BO /tmp/spdk_tgt_config.json.NZP 00:06:02.730 + ret=1 00:06:02.730 + echo '=== Start of file: /tmp/62.3BO ===' 00:06:02.730 + cat /tmp/62.3BO 00:06:02.730 + echo '=== End of file: /tmp/62.3BO ===' 00:06:02.730 + echo '' 00:06:02.730 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NZP ===' 00:06:02.730 + cat /tmp/spdk_tgt_config.json.NZP 00:06:02.730 + echo '=== End of file: /tmp/spdk_tgt_config.json.NZP ===' 00:06:02.730 + echo '' 00:06:02.730 + rm /tmp/62.3BO /tmp/spdk_tgt_config.json.NZP 00:06:02.730 + exit 1 00:06:02.730 16:27:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:02.730 INFO: configuration change detected. 
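[editor sketch] The detected change is the sentinel bdev: the test deletes MallocBdevForConfigChangeCheck (created right after setup) and re-runs the same sorted diff, which must now fail. The mutation step, reusing the $SPDK_DIR path from the sketch above:

    # Delete the sentinel; the next save_config output no longer matches the
    # reference file, so json_diff.sh returns 1 as traced above.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck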
00:06:02.730 16:27:09 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@317 -- # [[ -n 1638615 ]] 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 16:27:09 json_config -- json_config/json_config.sh@323 -- # killprocess 1638615 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@946 -- # '[' -z 1638615 ']' 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@950 -- # kill -0 1638615 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@951 -- # uname 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1638615 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1638615' 00:06:02.731 killing process with pid 1638615 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@965 -- # kill 1638615 00:06:02.731 [2024-05-15 16:27:09.928289] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:02.731 16:27:09 json_config -- common/autotest_common.sh@970 -- # wait 1638615 00:06:04.630 16:27:11 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.630 16:27:11 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:04.630 16:27:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.630 16:27:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.630 16:27:11 
json_config -- json_config/json_config.sh@328 -- # return 0 00:06:04.630 16:27:11 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:04.630 INFO: Success 00:06:04.630 00:06:04.630 real 0m16.640s 00:06:04.630 user 0m18.606s 00:06:04.630 sys 0m2.089s 00:06:04.630 16:27:11 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.630 16:27:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.630 ************************************ 00:06:04.630 END TEST json_config 00:06:04.630 ************************************ 00:06:04.630 16:27:11 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:04.630 16:27:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.630 16:27:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.630 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.630 ************************************ 00:06:04.630 START TEST json_config_extra_key 00:06:04.630 ************************************ 00:06:04.630 16:27:11 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:04.630 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.630 16:27:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.630 16:27:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.630 
16:27:11 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.630 16:27:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.630 16:27:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.630 16:27:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.630 16:27:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:04.630 16:27:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.630 16:27:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.630 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:04.630 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:04.631 16:27:11 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:04.631 INFO: launching applications... 00:06:04.631 16:27:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1640160 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.631 Waiting for target to run... 00:06:04.631 16:27:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1640160 /var/tmp/spdk_tgt.sock 00:06:04.631 16:27:11 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1640160 ']' 00:06:04.631 16:27:11 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.631 16:27:11 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.631 16:27:11 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.631 16:27:11 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.631 16:27:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.631 [2024-05-15 16:27:11.638154] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:04.631 [2024-05-15 16:27:11.638259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640160 ] 00:06:04.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.889 [2024-05-15 16:27:11.980961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.889 [2024-05-15 16:27:12.040727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.454 16:27:12 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.454 16:27:12 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:05.454 00:06:05.454 16:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:05.454 INFO: shutting down applications... 00:06:05.454 16:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1640160 ]] 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1640160 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1640160 00:06:05.454 16:27:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1640160 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.017 16:27:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.017 SPDK target shutdown done 00:06:06.017 16:27:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:06.017 Success 00:06:06.017 00:06:06.017 real 0m1.534s 00:06:06.017 user 0m1.506s 00:06:06.017 sys 0m0.429s 00:06:06.017 16:27:13 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.017 16:27:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.017 ************************************ 00:06:06.017 END TEST json_config_extra_key 00:06:06.017 ************************************ 00:06:06.017 16:27:13 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.017 16:27:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.017 16:27:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.017 16:27:13 -- common/autotest_common.sh@10 -- # set +x 00:06:06.017 ************************************ 
00:06:06.017 START TEST alias_rpc 00:06:06.017 ************************************ 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.017 * Looking for test storage... 00:06:06.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:06.017 16:27:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.017 16:27:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1640347 00:06:06.017 16:27:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.017 16:27:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1640347 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1640347 ']' 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.017 16:27:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.017 [2024-05-15 16:27:13.230882] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:06.017 [2024-05-15 16:27:13.230966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640347 ] 00:06:06.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.274 [2024-05-15 16:27:13.298520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.274 [2024-05-15 16:27:13.383374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.531 16:27:13 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.531 16:27:13 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:06.531 16:27:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:06.788 16:27:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1640347 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1640347 ']' 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1640347 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1640347 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1640347' 00:06:06.788 killing process with pid 1640347 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@965 -- # kill 1640347 00:06:06.788 16:27:13 alias_rpc -- common/autotest_common.sh@970 -- # wait 1640347 
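[editor sketch] The killprocess trace just above follows a recurring autotest pattern: confirm the pid still names an SPDK reactor (and not sudo) before killing it, then reap it. A simplified sketch; the real helper in autotest_common.sh handles more cases than shown here:

    pid=1640347   # pid taken from this trace, shown for illustration only
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # valid here because the target is a child of this shell
    fi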
00:06:07.353 00:06:07.353 real 0m1.212s 00:06:07.353 user 0m1.277s 00:06:07.353 sys 0m0.427s 00:06:07.353 16:27:14 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.353 16:27:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.353 ************************************ 00:06:07.353 END TEST alias_rpc 00:06:07.353 ************************************ 00:06:07.353 16:27:14 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:07.353 16:27:14 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:07.353 16:27:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.353 16:27:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.353 16:27:14 -- common/autotest_common.sh@10 -- # set +x 00:06:07.353 ************************************ 00:06:07.353 START TEST spdkcli_tcp 00:06:07.353 ************************************ 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:07.353 * Looking for test storage... 00:06:07.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1640544 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:07.353 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1640544 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1640544 ']' 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.353 16:27:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.353 [2024-05-15 16:27:14.496278] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:07.353 [2024-05-15 16:27:14.496368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640544 ] 00:06:07.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.353 [2024-05-15 16:27:14.566347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.611 [2024-05-15 16:27:14.649963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.611 [2024-05-15 16:27:14.649968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.869 16:27:14 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.869 16:27:14 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:07.869 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1640664 00:06:07.869 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.869 16:27:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:08.127 [ 00:06:08.127 "bdev_malloc_delete", 00:06:08.127 "bdev_malloc_create", 00:06:08.127 "bdev_null_resize", 00:06:08.127 "bdev_null_delete", 00:06:08.127 "bdev_null_create", 00:06:08.127 "bdev_nvme_cuse_unregister", 00:06:08.127 "bdev_nvme_cuse_register", 00:06:08.127 "bdev_opal_new_user", 00:06:08.127 "bdev_opal_set_lock_state", 00:06:08.127 "bdev_opal_delete", 00:06:08.127 "bdev_opal_get_info", 00:06:08.127 "bdev_opal_create", 00:06:08.127 "bdev_nvme_opal_revert", 00:06:08.127 "bdev_nvme_opal_init", 00:06:08.127 "bdev_nvme_send_cmd", 00:06:08.127 "bdev_nvme_get_path_iostat", 00:06:08.127 "bdev_nvme_get_mdns_discovery_info", 00:06:08.127 "bdev_nvme_stop_mdns_discovery", 00:06:08.127 "bdev_nvme_start_mdns_discovery", 00:06:08.127 "bdev_nvme_set_multipath_policy", 00:06:08.127 "bdev_nvme_set_preferred_path", 00:06:08.127 "bdev_nvme_get_io_paths", 00:06:08.127 "bdev_nvme_remove_error_injection", 00:06:08.127 "bdev_nvme_add_error_injection", 00:06:08.127 "bdev_nvme_get_discovery_info", 00:06:08.127 "bdev_nvme_stop_discovery", 00:06:08.127 "bdev_nvme_start_discovery", 00:06:08.127 "bdev_nvme_get_controller_health_info", 00:06:08.127 "bdev_nvme_disable_controller", 00:06:08.127 "bdev_nvme_enable_controller", 00:06:08.127 "bdev_nvme_reset_controller", 00:06:08.127 "bdev_nvme_get_transport_statistics", 00:06:08.127 "bdev_nvme_apply_firmware", 00:06:08.127 "bdev_nvme_detach_controller", 00:06:08.127 "bdev_nvme_get_controllers", 00:06:08.127 "bdev_nvme_attach_controller", 00:06:08.127 "bdev_nvme_set_hotplug", 00:06:08.127 "bdev_nvme_set_options", 00:06:08.127 "bdev_passthru_delete", 00:06:08.127 "bdev_passthru_create", 00:06:08.127 "bdev_lvol_check_shallow_copy", 00:06:08.127 "bdev_lvol_start_shallow_copy", 00:06:08.127 "bdev_lvol_grow_lvstore", 00:06:08.127 "bdev_lvol_get_lvols", 00:06:08.127 "bdev_lvol_get_lvstores", 00:06:08.127 "bdev_lvol_delete", 00:06:08.127 "bdev_lvol_set_read_only", 00:06:08.127 "bdev_lvol_resize", 00:06:08.127 "bdev_lvol_decouple_parent", 00:06:08.127 "bdev_lvol_inflate", 00:06:08.127 "bdev_lvol_rename", 00:06:08.127 "bdev_lvol_clone_bdev", 00:06:08.127 "bdev_lvol_clone", 00:06:08.127 "bdev_lvol_snapshot", 00:06:08.127 "bdev_lvol_create", 00:06:08.127 "bdev_lvol_delete_lvstore", 00:06:08.127 "bdev_lvol_rename_lvstore", 00:06:08.127 "bdev_lvol_create_lvstore", 00:06:08.127 "bdev_raid_set_options", 
00:06:08.127 "bdev_raid_remove_base_bdev", 00:06:08.127 "bdev_raid_add_base_bdev", 00:06:08.127 "bdev_raid_delete", 00:06:08.127 "bdev_raid_create", 00:06:08.127 "bdev_raid_get_bdevs", 00:06:08.127 "bdev_error_inject_error", 00:06:08.127 "bdev_error_delete", 00:06:08.127 "bdev_error_create", 00:06:08.127 "bdev_split_delete", 00:06:08.127 "bdev_split_create", 00:06:08.127 "bdev_delay_delete", 00:06:08.127 "bdev_delay_create", 00:06:08.127 "bdev_delay_update_latency", 00:06:08.127 "bdev_zone_block_delete", 00:06:08.127 "bdev_zone_block_create", 00:06:08.127 "blobfs_create", 00:06:08.127 "blobfs_detect", 00:06:08.127 "blobfs_set_cache_size", 00:06:08.127 "bdev_aio_delete", 00:06:08.127 "bdev_aio_rescan", 00:06:08.127 "bdev_aio_create", 00:06:08.127 "bdev_ftl_set_property", 00:06:08.127 "bdev_ftl_get_properties", 00:06:08.127 "bdev_ftl_get_stats", 00:06:08.127 "bdev_ftl_unmap", 00:06:08.127 "bdev_ftl_unload", 00:06:08.127 "bdev_ftl_delete", 00:06:08.127 "bdev_ftl_load", 00:06:08.127 "bdev_ftl_create", 00:06:08.127 "bdev_virtio_attach_controller", 00:06:08.127 "bdev_virtio_scsi_get_devices", 00:06:08.127 "bdev_virtio_detach_controller", 00:06:08.127 "bdev_virtio_blk_set_hotplug", 00:06:08.127 "bdev_iscsi_delete", 00:06:08.127 "bdev_iscsi_create", 00:06:08.127 "bdev_iscsi_set_options", 00:06:08.127 "accel_error_inject_error", 00:06:08.127 "ioat_scan_accel_module", 00:06:08.127 "dsa_scan_accel_module", 00:06:08.127 "iaa_scan_accel_module", 00:06:08.127 "vfu_virtio_create_scsi_endpoint", 00:06:08.127 "vfu_virtio_scsi_remove_target", 00:06:08.127 "vfu_virtio_scsi_add_target", 00:06:08.127 "vfu_virtio_create_blk_endpoint", 00:06:08.127 "vfu_virtio_delete_endpoint", 00:06:08.127 "keyring_file_remove_key", 00:06:08.127 "keyring_file_add_key", 00:06:08.127 "iscsi_get_histogram", 00:06:08.127 "iscsi_enable_histogram", 00:06:08.127 "iscsi_set_options", 00:06:08.127 "iscsi_get_auth_groups", 00:06:08.127 "iscsi_auth_group_remove_secret", 00:06:08.127 "iscsi_auth_group_add_secret", 00:06:08.127 "iscsi_delete_auth_group", 00:06:08.127 "iscsi_create_auth_group", 00:06:08.128 "iscsi_set_discovery_auth", 00:06:08.128 "iscsi_get_options", 00:06:08.128 "iscsi_target_node_request_logout", 00:06:08.128 "iscsi_target_node_set_redirect", 00:06:08.128 "iscsi_target_node_set_auth", 00:06:08.128 "iscsi_target_node_add_lun", 00:06:08.128 "iscsi_get_stats", 00:06:08.128 "iscsi_get_connections", 00:06:08.128 "iscsi_portal_group_set_auth", 00:06:08.128 "iscsi_start_portal_group", 00:06:08.128 "iscsi_delete_portal_group", 00:06:08.128 "iscsi_create_portal_group", 00:06:08.128 "iscsi_get_portal_groups", 00:06:08.128 "iscsi_delete_target_node", 00:06:08.128 "iscsi_target_node_remove_pg_ig_maps", 00:06:08.128 "iscsi_target_node_add_pg_ig_maps", 00:06:08.128 "iscsi_create_target_node", 00:06:08.128 "iscsi_get_target_nodes", 00:06:08.128 "iscsi_delete_initiator_group", 00:06:08.128 "iscsi_initiator_group_remove_initiators", 00:06:08.128 "iscsi_initiator_group_add_initiators", 00:06:08.128 "iscsi_create_initiator_group", 00:06:08.128 "iscsi_get_initiator_groups", 00:06:08.128 "nvmf_set_crdt", 00:06:08.128 "nvmf_set_config", 00:06:08.128 "nvmf_set_max_subsystems", 00:06:08.128 "nvmf_stop_mdns_prr", 00:06:08.128 "nvmf_publish_mdns_prr", 00:06:08.128 "nvmf_subsystem_get_listeners", 00:06:08.128 "nvmf_subsystem_get_qpairs", 00:06:08.128 "nvmf_subsystem_get_controllers", 00:06:08.128 "nvmf_get_stats", 00:06:08.128 "nvmf_get_transports", 00:06:08.128 "nvmf_create_transport", 00:06:08.128 "nvmf_get_targets", 00:06:08.128 
"nvmf_delete_target", 00:06:08.128 "nvmf_create_target", 00:06:08.128 "nvmf_subsystem_allow_any_host", 00:06:08.128 "nvmf_subsystem_remove_host", 00:06:08.128 "nvmf_subsystem_add_host", 00:06:08.128 "nvmf_ns_remove_host", 00:06:08.128 "nvmf_ns_add_host", 00:06:08.128 "nvmf_subsystem_remove_ns", 00:06:08.128 "nvmf_subsystem_add_ns", 00:06:08.128 "nvmf_subsystem_listener_set_ana_state", 00:06:08.128 "nvmf_discovery_get_referrals", 00:06:08.128 "nvmf_discovery_remove_referral", 00:06:08.128 "nvmf_discovery_add_referral", 00:06:08.128 "nvmf_subsystem_remove_listener", 00:06:08.128 "nvmf_subsystem_add_listener", 00:06:08.128 "nvmf_delete_subsystem", 00:06:08.128 "nvmf_create_subsystem", 00:06:08.128 "nvmf_get_subsystems", 00:06:08.128 "env_dpdk_get_mem_stats", 00:06:08.128 "nbd_get_disks", 00:06:08.128 "nbd_stop_disk", 00:06:08.128 "nbd_start_disk", 00:06:08.128 "ublk_recover_disk", 00:06:08.128 "ublk_get_disks", 00:06:08.128 "ublk_stop_disk", 00:06:08.128 "ublk_start_disk", 00:06:08.128 "ublk_destroy_target", 00:06:08.128 "ublk_create_target", 00:06:08.128 "virtio_blk_create_transport", 00:06:08.128 "virtio_blk_get_transports", 00:06:08.128 "vhost_controller_set_coalescing", 00:06:08.128 "vhost_get_controllers", 00:06:08.128 "vhost_delete_controller", 00:06:08.128 "vhost_create_blk_controller", 00:06:08.128 "vhost_scsi_controller_remove_target", 00:06:08.128 "vhost_scsi_controller_add_target", 00:06:08.128 "vhost_start_scsi_controller", 00:06:08.128 "vhost_create_scsi_controller", 00:06:08.128 "thread_set_cpumask", 00:06:08.128 "framework_get_scheduler", 00:06:08.128 "framework_set_scheduler", 00:06:08.128 "framework_get_reactors", 00:06:08.128 "thread_get_io_channels", 00:06:08.128 "thread_get_pollers", 00:06:08.128 "thread_get_stats", 00:06:08.128 "framework_monitor_context_switch", 00:06:08.128 "spdk_kill_instance", 00:06:08.128 "log_enable_timestamps", 00:06:08.128 "log_get_flags", 00:06:08.128 "log_clear_flag", 00:06:08.128 "log_set_flag", 00:06:08.128 "log_get_level", 00:06:08.128 "log_set_level", 00:06:08.128 "log_get_print_level", 00:06:08.128 "log_set_print_level", 00:06:08.128 "framework_enable_cpumask_locks", 00:06:08.128 "framework_disable_cpumask_locks", 00:06:08.128 "framework_wait_init", 00:06:08.128 "framework_start_init", 00:06:08.128 "scsi_get_devices", 00:06:08.128 "bdev_get_histogram", 00:06:08.128 "bdev_enable_histogram", 00:06:08.128 "bdev_set_qos_limit", 00:06:08.128 "bdev_set_qd_sampling_period", 00:06:08.128 "bdev_get_bdevs", 00:06:08.128 "bdev_reset_iostat", 00:06:08.128 "bdev_get_iostat", 00:06:08.128 "bdev_examine", 00:06:08.128 "bdev_wait_for_examine", 00:06:08.128 "bdev_set_options", 00:06:08.128 "notify_get_notifications", 00:06:08.128 "notify_get_types", 00:06:08.128 "accel_get_stats", 00:06:08.128 "accel_set_options", 00:06:08.128 "accel_set_driver", 00:06:08.128 "accel_crypto_key_destroy", 00:06:08.128 "accel_crypto_keys_get", 00:06:08.128 "accel_crypto_key_create", 00:06:08.128 "accel_assign_opc", 00:06:08.128 "accel_get_module_info", 00:06:08.128 "accel_get_opc_assignments", 00:06:08.128 "vmd_rescan", 00:06:08.128 "vmd_remove_device", 00:06:08.128 "vmd_enable", 00:06:08.128 "sock_get_default_impl", 00:06:08.128 "sock_set_default_impl", 00:06:08.128 "sock_impl_set_options", 00:06:08.128 "sock_impl_get_options", 00:06:08.128 "iobuf_get_stats", 00:06:08.128 "iobuf_set_options", 00:06:08.128 "keyring_get_keys", 00:06:08.128 "framework_get_pci_devices", 00:06:08.128 "framework_get_config", 00:06:08.128 "framework_get_subsystems", 00:06:08.128 
"vfu_tgt_set_base_path", 00:06:08.128 "trace_get_info", 00:06:08.128 "trace_get_tpoint_group_mask", 00:06:08.128 "trace_disable_tpoint_group", 00:06:08.128 "trace_enable_tpoint_group", 00:06:08.128 "trace_clear_tpoint_mask", 00:06:08.128 "trace_set_tpoint_mask", 00:06:08.128 "spdk_get_version", 00:06:08.128 "rpc_get_methods" 00:06:08.128 ] 00:06:08.128 16:27:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.128 16:27:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:08.128 16:27:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1640544 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1640544 ']' 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 1640544 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1640544 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1640544' 00:06:08.128 killing process with pid 1640544 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1640544 00:06:08.128 16:27:15 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1640544 00:06:08.386 00:06:08.386 real 0m1.207s 00:06:08.386 user 0m2.118s 00:06:08.386 sys 0m0.463s 00:06:08.386 16:27:15 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.386 16:27:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.386 ************************************ 00:06:08.386 END TEST spdkcli_tcp 00:06:08.386 ************************************ 00:06:08.644 16:27:15 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.644 16:27:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.644 16:27:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.644 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.644 ************************************ 00:06:08.644 START TEST dpdk_mem_utility 00:06:08.644 ************************************ 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.644 * Looking for test storage... 
00:06:08.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:08.644 16:27:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:08.644 16:27:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1640860 00:06:08.644 16:27:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.644 16:27:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1640860 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1640860 ']' 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.644 16:27:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.644 [2024-05-15 16:27:15.750539] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:08.644 [2024-05-15 16:27:15.750632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640860 ] 00:06:08.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.644 [2024-05-15 16:27:15.816169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.902 [2024-05-15 16:27:15.896837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.187 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.187 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:09.187 16:27:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.187 16:27:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.187 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.187 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.187 { 00:06:09.187 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.187 } 00:06:09.187 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.187 16:27:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:09.187 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:09.187 1 heaps totaling size 814.000000 MiB 00:06:09.187 size: 814.000000 MiB heap id: 0 00:06:09.187 end heaps---------- 00:06:09.187 8 mempools totaling size 598.116089 MiB 00:06:09.187 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.187 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.187 size: 84.521057 MiB name: bdev_io_1640860 00:06:09.187 size: 51.011292 MiB name: evtpool_1640860 00:06:09.187 size: 50.003479 MiB name: 
msgpool_1640860 00:06:09.187 size: 21.763794 MiB name: PDU_Pool 00:06:09.187 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.187 size: 0.026123 MiB name: Session_Pool 00:06:09.187 end mempools------- 00:06:09.187 6 memzones totaling size 4.142822 MiB 00:06:09.187 size: 1.000366 MiB name: RG_ring_0_1640860 00:06:09.187 size: 1.000366 MiB name: RG_ring_1_1640860 00:06:09.187 size: 1.000366 MiB name: RG_ring_4_1640860 00:06:09.187 size: 1.000366 MiB name: RG_ring_5_1640860 00:06:09.187 size: 0.125366 MiB name: RG_ring_2_1640860 00:06:09.187 size: 0.015991 MiB name: RG_ring_3_1640860 00:06:09.187 end memzones------- 00:06:09.187 16:27:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.187 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:09.187 list of free elements. size: 12.519348 MiB 00:06:09.187 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:09.187 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:09.187 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:09.187 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:09.187 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:09.187 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:09.187 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:09.187 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:09.187 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:09.187 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:09.187 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:09.187 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:09.187 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:09.187 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:09.187 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:09.187 list of standard malloc elements. 
size: 199.218079 MiB 00:06:09.187 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:09.187 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:09.187 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:09.187 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:09.187 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:09.187 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:09.187 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:09.187 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.187 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:09.187 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:09.187 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:09.187 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:09.187 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:09.187 list of memzone associated elements. 
size: 602.262573 MiB 00:06:09.187 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:09.187 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.187 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:09.187 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.187 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:09.187 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1640860_0 00:06:09.187 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:09.187 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1640860_0 00:06:09.187 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:09.187 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1640860_0 00:06:09.187 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:09.187 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.187 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:09.187 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.187 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:09.187 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1640860 00:06:09.187 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:09.187 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1640860 00:06:09.187 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:09.187 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1640860 00:06:09.187 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:09.187 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.187 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:09.187 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.187 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:09.187 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.187 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:09.187 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.187 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:09.187 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1640860 00:06:09.187 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:09.187 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1640860 00:06:09.187 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:09.187 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1640860 00:06:09.187 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:09.187 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1640860 00:06:09.187 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:09.187 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1640860 00:06:09.187 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:09.187 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.187 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:09.187 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.187 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:09.187 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.187 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:09.187 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1640860 00:06:09.187 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:09.187 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.187 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:09.187 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.187 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:09.187 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1640860 00:06:09.187 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:09.187 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.188 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:09.188 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1640860 00:06:09.188 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:09.188 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1640860 00:06:09.188 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:09.188 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.188 16:27:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.188 16:27:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1640860 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1640860 ']' 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1640860 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1640860 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1640860' 00:06:09.188 killing process with pid 1640860 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1640860 00:06:09.188 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1640860 00:06:09.753 00:06:09.753 real 0m1.069s 00:06:09.753 user 0m1.011s 00:06:09.753 sys 0m0.420s 00:06:09.753 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.753 16:27:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.753 ************************************ 00:06:09.753 END TEST dpdk_mem_utility 00:06:09.753 ************************************ 00:06:09.753 16:27:16 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:09.753 16:27:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.753 16:27:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.753 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.753 ************************************ 00:06:09.753 START TEST event 00:06:09.753 ************************************ 00:06:09.753 16:27:16 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:09.753 * Looking for test storage... 
00:06:09.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:09.753 16:27:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:09.753 16:27:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.753 16:27:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.753 16:27:16 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:09.753 16:27:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.753 16:27:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.753 ************************************ 00:06:09.753 START TEST event_perf 00:06:09.753 ************************************ 00:06:09.753 16:27:16 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.753 Running I/O for 1 seconds...[2024-05-15 16:27:16.860977] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:09.753 [2024-05-15 16:27:16.861043] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641048 ] 00:06:09.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.753 [2024-05-15 16:27:16.928901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.039 [2024-05-15 16:27:17.017009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.039 [2024-05-15 16:27:17.017067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.039 [2024-05-15 16:27:17.017131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.039 [2024-05-15 16:27:17.017134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.974 Running I/O for 1 seconds... 00:06:10.974 lcore 0: 234119 00:06:10.974 lcore 1: 234118 00:06:10.974 lcore 2: 234117 00:06:10.974 lcore 3: 234117 00:06:10.974 done. 00:06:10.974 00:06:10.974 real 0m1.254s 00:06:10.974 user 0m4.159s 00:06:10.974 sys 0m0.090s 00:06:10.974 16:27:18 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.974 16:27:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.974 ************************************ 00:06:10.974 END TEST event_perf 00:06:10.974 ************************************ 00:06:10.974 16:27:18 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:10.974 16:27:18 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:10.974 16:27:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.974 16:27:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.974 ************************************ 00:06:10.974 START TEST event_reactor 00:06:10.974 ************************************ 00:06:10.974 16:27:18 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:10.974 [2024-05-15 16:27:18.170346] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:10.974 [2024-05-15 16:27:18.170412] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641205 ] 00:06:11.232 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.232 [2024-05-15 16:27:18.244014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.232 [2024-05-15 16:27:18.332446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.604 test_start 00:06:12.604 oneshot 00:06:12.604 tick 100 00:06:12.604 tick 100 00:06:12.604 tick 250 00:06:12.604 tick 100 00:06:12.604 tick 100 00:06:12.604 tick 100 00:06:12.604 tick 250 00:06:12.604 tick 500 00:06:12.604 tick 100 00:06:12.604 tick 100 00:06:12.604 tick 250 00:06:12.604 tick 100 00:06:12.604 tick 100 00:06:12.604 test_end 00:06:12.604 00:06:12.604 real 0m1.257s 00:06:12.604 user 0m1.157s 00:06:12.604 sys 0m0.096s 00:06:12.604 16:27:19 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.604 16:27:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.604 ************************************ 00:06:12.604 END TEST event_reactor 00:06:12.604 ************************************ 00:06:12.604 16:27:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.604 16:27:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:12.604 16:27:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.604 16:27:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.604 ************************************ 00:06:12.604 START TEST event_reactor_perf 00:06:12.604 ************************************ 00:06:12.604 16:27:19 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.604 [2024-05-15 16:27:19.484941] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:12.604 [2024-05-15 16:27:19.485009] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641363 ] 00:06:12.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.604 [2024-05-15 16:27:19.558604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.604 [2024-05-15 16:27:19.646171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.537 test_start 00:06:13.537 test_end 00:06:13.537 Performance: 351058 events per second 00:06:13.537 00:06:13.537 real 0m1.256s 00:06:13.537 user 0m1.159s 00:06:13.537 sys 0m0.092s 00:06:13.537 16:27:20 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.537 16:27:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.537 ************************************ 00:06:13.537 END TEST event_reactor_perf 00:06:13.537 ************************************ 00:06:13.537 16:27:20 event -- event/event.sh@49 -- # uname -s 00:06:13.537 16:27:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:13.537 16:27:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:13.537 16:27:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.537 16:27:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.537 16:27:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.794 ************************************ 00:06:13.794 START TEST event_scheduler 00:06:13.794 ************************************ 00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:13.794 * Looking for test storage... 00:06:13.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:13.794 16:27:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:13.794 16:27:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1641543 00:06:13.794 16:27:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:13.794 16:27:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.794 16:27:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1641543 00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1641543 ']' 00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.794 16:27:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.794 [2024-05-15 16:27:20.873040] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:13.794 [2024-05-15 16:27:20.873114] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641543 ] 00:06:13.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.794 [2024-05-15 16:27:20.938555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.053 [2024-05-15 16:27:21.023126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.053 [2024-05-15 16:27:21.023166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.053 [2024-05-15 16:27:21.023230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.053 [2024-05-15 16:27:21.023235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:14.053 16:27:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 POWER: Env isn't set yet! 00:06:14.053 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:14.053 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:14.053 POWER: Cannot get available frequencies of lcore 0 00:06:14.053 POWER: Attempting to initialise PSTAT power management... 00:06:14.053 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:14.053 POWER: Initialized successfully for lcore 0 power management 00:06:14.053 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:14.053 POWER: Initialized successfully for lcore 1 power management 00:06:14.053 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:14.053 POWER: Initialized successfully for lcore 2 power management 00:06:14.053 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:14.053 POWER: Initialized successfully for lcore 3 power management 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.053 16:27:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 [2024-05-15 16:27:21.212207] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
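As an aside on what the trace above exercises: scheduler.sh starts the app with --wait-for-rpc, switches it to the dynamic scheduler while it is still held, and only then releases it with framework_start_init. A minimal sketch of driving the same sequence by hand is below; it assumes a target already running with --wait-for-rpc on the default /var/tmp/spdk.sock socket, and $SPDK_DIR is a placeholder for an SPDK checkout, not a path from this run. All four method names appear in the rpc_get_methods dump earlier in this log.

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"   # $SPDK_DIR: placeholder, not from this run
$RPC framework_set_scheduler dynamic    # allowed while --wait-for-rpc still holds the app
$RPC framework_start_init               # release the hold; reactors begin scheduling
$RPC framework_get_scheduler            # confirm 'dynamic' is active
$RPC framework_get_reactors             # inspect per-core reactor/thread placement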
00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.053 16:27:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 ************************************ 00:06:14.053 START TEST scheduler_create_thread 00:06:14.053 ************************************ 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 2 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 3 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 4 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.053 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.311 5 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.311 6 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.311 7 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.311 8 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.311 9 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.311 10 00:06:14.311 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.312 16:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.263 16:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.263 16:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.263 16:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.263 16:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.652 16:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.652 16:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:16.652 16:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:16.652 16:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.652 16:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.585 16:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.585 00:06:17.586 real 0m3.379s 00:06:17.586 user 0m0.010s 00:06:17.586 sys 0m0.005s 00:06:17.586 16:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.586 16:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.586 ************************************ 00:06:17.586 END TEST scheduler_create_thread 00:06:17.586 ************************************ 00:06:17.586 16:27:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:17.586 16:27:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1641543 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1641543 ']' 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1641543 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1641543 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1641543' 00:06:17.586 killing process with pid 1641543 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1641543 00:06:17.586 16:27:24 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1641543 00:06:17.844 [2024-05-15 16:27:25.001174] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
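The POWER lines that follow are the counterpart of the 'performance' governor setup logged at test start: DPDK's power library records each core's cpufreq governor, pins it to performance for the run, and restores the original (here schedutil/userspace) on exit. A rough sketch of that save/restore pattern over sysfs, assuming root and cores 0-3 as in this run; this is illustrative only, not the harness's or DPDK's actual code:

declare -A saved
for g in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do
    saved[$g]=$(cat "$g")        # remember the original governor for this core
    echo performance > "$g"      # pin the core for the benchmark
done
# ... run the scheduler test ...
for g in "${!saved[@]}"; do
    echo "${saved[$g]}" > "$g"   # put back e.g. 'schedutil' or 'userspace'
done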
00:06:18.102 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:18.102 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:18.102 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:18.102 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:18.102 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:18.102 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:18.102 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:18.102 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:18.102 00:06:18.102 real 0m4.492s 00:06:18.102 user 0m7.995s 00:06:18.102 sys 0m0.343s 00:06:18.102 16:27:25 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.102 16:27:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.102 ************************************ 00:06:18.102 END TEST event_scheduler 00:06:18.102 ************************************ 00:06:18.102 16:27:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:18.102 16:27:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:18.102 16:27:25 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.102 16:27:25 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.102 16:27:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.361 ************************************ 00:06:18.361 START TEST app_repeat 00:06:18.361 ************************************ 00:06:18.361 16:27:25 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1642130 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1642130' 00:06:18.361 Process app_repeat pid: 1642130 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:18.361 spdk_app_start Round 0 00:06:18.361 16:27:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1642130 /var/tmp/spdk-nbd.sock 00:06:18.361 16:27:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1642130 ']' 00:06:18.361 16:27:25 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.361 16:27:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.361 16:27:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.361 16:27:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.361 16:27:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.361 [2024-05-15 16:27:25.359520] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:18.361 [2024-05-15 16:27:25.359593] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642130 ] 00:06:18.361 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.361 [2024-05-15 16:27:25.425813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.361 [2024-05-15 16:27:25.512636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.361 [2024-05-15 16:27:25.512640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.619 16:27:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.619 16:27:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:18.619 16:27:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.876 Malloc0 00:06:18.876 16:27:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.134 Malloc1 00:06:19.134 16:27:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.134 16:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.391 /dev/nbd0 00:06:19.391 16:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.391 16:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.391 1+0 records in 00:06:19.391 1+0 records out 00:06:19.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179411 s, 22.8 MB/s 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:19.391 16:27:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:19.391 16:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.391 16:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.391 16:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.648 /dev/nbd1 00:06:19.648 16:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.648 16:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.648 1+0 records in 00:06:19.648 1+0 records out 00:06:19.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000152852 s, 26.8 MB/s 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.648 16:27:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:19.649 16:27:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:19.649 16:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.649 16:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.649 16:27:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.649 16:27:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.649 16:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.906 { 00:06:19.906 "nbd_device": "/dev/nbd0", 00:06:19.906 "bdev_name": "Malloc0" 00:06:19.906 }, 00:06:19.906 { 00:06:19.906 "nbd_device": "/dev/nbd1", 00:06:19.906 "bdev_name": "Malloc1" 00:06:19.906 } 00:06:19.906 ]' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.906 { 00:06:19.906 "nbd_device": "/dev/nbd0", 00:06:19.906 "bdev_name": "Malloc0" 00:06:19.906 }, 00:06:19.906 { 00:06:19.906 "nbd_device": "/dev/nbd1", 00:06:19.906 "bdev_name": "Malloc1" 00:06:19.906 } 00:06:19.906 ]' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.906 /dev/nbd1' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.906 /dev/nbd1' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.906 16:27:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.907 256+0 records in 00:06:19.907 256+0 records out 00:06:19.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503044 s, 208 MB/s 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.907 256+0 records in 00:06:19.907 256+0 records out 00:06:19.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196959 s, 53.2 MB/s 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.907 256+0 records in 00:06:19.907 256+0 records out 00:06:19.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219446 s, 47.8 MB/s 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.907 16:27:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.164 16:27:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.421 16:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.679 16:27:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.679 16:27:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.936 16:27:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.193 [2024-05-15 16:27:28.360906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.451 [2024-05-15 16:27:28.447839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.452 [2024-05-15 16:27:28.447839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.452 [2024-05-15 16:27:28.509895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.452 [2024-05-15 16:27:28.509973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
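The round that just completed is the core of the app_repeat data-path check: seed a random 1 MiB file, push it through each exported /dev/nbdX with O_DIRECT, then compare the device contents back against the file byte-for-byte. A minimal standalone sketch of that write/verify cycle, using the same 4096-byte blocks and 256-block count as the trace (the tmp-file path below is a placeholder, not the harness's real location):

    # write/verify cycle, mirroring the nbd_dd_data_verify steps traced above
    tmp_file=/tmp/nbdrandtest            # placeholder path
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: seed a random pattern, then push it through each nbd device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: compare the first 1 MiB of each device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev" >&2
    done
    rm "$tmp_file"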
00:06:23.981 16:27:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.981 16:27:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:23.981 spdk_app_start Round 1 00:06:23.981 16:27:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1642130 /var/tmp/spdk-nbd.sock 00:06:23.981 16:27:31 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1642130 ']' 00:06:23.981 16:27:31 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.981 16:27:31 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.981 16:27:31 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.981 16:27:31 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.981 16:27:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 16:27:31 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.239 16:27:31 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:24.239 16:27:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.497 Malloc0 00:06:24.497 16:27:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.756 Malloc1 00:06:24.756 16:27:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.756 16:27:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.014 /dev/nbd0 00:06:25.014 16:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.014 16:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.014 1+0 records in 00:06:25.014 1+0 records out 00:06:25.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169675 s, 24.1 MB/s 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:25.014 16:27:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:25.014 16:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.014 16:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.014 16:27:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.273 /dev/nbd1 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.273 1+0 records in 00:06:25.273 1+0 records out 00:06:25.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209905 s, 19.5 MB/s 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:25.273 16:27:32 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:25.273 16:27:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.273 16:27:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.531 { 00:06:25.531 "nbd_device": "/dev/nbd0", 00:06:25.531 "bdev_name": "Malloc0" 00:06:25.531 }, 00:06:25.531 { 00:06:25.531 "nbd_device": "/dev/nbd1", 00:06:25.531 "bdev_name": "Malloc1" 00:06:25.531 } 00:06:25.531 ]' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.531 { 00:06:25.531 "nbd_device": "/dev/nbd0", 00:06:25.531 "bdev_name": "Malloc0" 00:06:25.531 }, 00:06:25.531 { 00:06:25.531 "nbd_device": "/dev/nbd1", 00:06:25.531 "bdev_name": "Malloc1" 00:06:25.531 } 00:06:25.531 ]' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.531 /dev/nbd1' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.531 /dev/nbd1' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.531 256+0 records in 00:06:25.531 256+0 records out 00:06:25.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502771 s, 209 MB/s 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.531 256+0 records in 00:06:25.531 256+0 records out 00:06:25.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0235043 s, 44.6 MB/s 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.531 16:27:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.797 256+0 records in 00:06:25.798 256+0 records out 00:06:25.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223857 s, 46.8 MB/s 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.798 16:27:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.059 16:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.315 16:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.315 16:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.315 16:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.572 16:27:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.572 16:27:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.829 16:27:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.087 [2024-05-15 16:27:34.082049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.087 [2024-05-15 16:27:34.169560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.087 [2024-05-15 16:27:34.169565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.087 [2024-05-15 16:27:34.232831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.087 [2024-05-15 16:27:34.232914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
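Every nbd_start_disk in these rounds is followed by the same waitfornbd probe: poll /proc/partitions until the device name appears (up to 20 tries), then prove the device is readable with a single 4096-byte O_DIRECT read whose size is checked via stat. A hedged reconstruction of that helper from the traced autotest_common.sh lines; the retry pause is an assumption, since the traces above always succeed on the first probe:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; this path is never visible in the xtrace
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # one O_DIRECT block read, then confirm a non-empty copy landed
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }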
00:06:30.365 16:27:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.365 16:27:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:30.365 spdk_app_start Round 2 00:06:30.365 16:27:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1642130 /var/tmp/spdk-nbd.sock 00:06:30.365 16:27:36 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1642130 ']' 00:06:30.365 16:27:36 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.365 16:27:36 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.365 16:27:36 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.365 16:27:36 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.365 16:27:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 16:27:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.365 16:27:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:30.365 16:27:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.365 Malloc0 00:06:30.365 16:27:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.622 Malloc1 00:06:30.622 16:27:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.622 16:27:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.879 /dev/nbd0 00:06:30.879 16:27:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.879 16:27:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.879 1+0 records in 00:06:30.879 1+0 records out 00:06:30.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000117869 s, 34.8 MB/s 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:30.879 16:27:37 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:30.879 16:27:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.879 16:27:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.879 16:27:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.137 /dev/nbd1 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.137 1+0 records in 00:06:31.137 1+0 records out 00:06:31.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179168 s, 22.9 MB/s 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:31.137 16:27:38 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:31.137 16:27:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.137 16:27:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.395 { 00:06:31.395 "nbd_device": "/dev/nbd0", 00:06:31.395 "bdev_name": "Malloc0" 00:06:31.395 }, 00:06:31.395 { 00:06:31.395 "nbd_device": "/dev/nbd1", 00:06:31.395 "bdev_name": "Malloc1" 00:06:31.395 } 00:06:31.395 ]' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.395 { 00:06:31.395 "nbd_device": "/dev/nbd0", 00:06:31.395 "bdev_name": "Malloc0" 00:06:31.395 }, 00:06:31.395 { 00:06:31.395 "nbd_device": "/dev/nbd1", 00:06:31.395 "bdev_name": "Malloc1" 00:06:31.395 } 00:06:31.395 ]' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.395 /dev/nbd1' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.395 /dev/nbd1' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.395 256+0 records in 00:06:31.395 256+0 records out 00:06:31.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525829 s, 199 MB/s 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.395 256+0 records in 00:06:31.395 256+0 records out 00:06:31.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0203442 s, 51.5 MB/s 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.395 256+0 records in 00:06:31.395 256+0 records out 00:06:31.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251105 s, 41.8 MB/s 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.395 16:27:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.653 16:27:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.919 16:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.206 16:27:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.206 16:27:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.463 16:27:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.720 [2024-05-15 16:27:39.813807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.720 [2024-05-15 16:27:39.900864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.720 [2024-05-15 16:27:39.900869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.977 [2024-05-15 16:27:39.962773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.977 [2024-05-15 16:27:39.962850] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
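Before and after each teardown the harness counts the exported devices the same way: nbd_get_disks returns a JSON array, jq -r '.[] | .nbd_device' flattens it to one path per line, and grep -c /dev/nbd counts the matches (with a trailing true so a zero count's non-zero grep exit does not trip errexit). A short sketch of that count-and-assert step; the rpc.py path and socket mirror the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    expected=0   # 2 while the disks are exported, 0 after nbd_stop_disk

    # grep -c still prints 0 when nothing matches; || true only shields its exit code
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne "$expected" ]; then
        echo "expected $expected nbd disks, got $count" >&2
        exit 1
    fi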
00:06:35.502 16:27:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1642130 /var/tmp/spdk-nbd.sock 00:06:35.502 16:27:42 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1642130 ']' 00:06:35.502 16:27:42 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.502 16:27:42 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.502 16:27:42 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.502 16:27:42 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.502 16:27:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:35.760 16:27:42 event.app_repeat -- event/event.sh@39 -- # killprocess 1642130 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1642130 ']' 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1642130 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1642130 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1642130' 00:06:35.760 killing process with pid 1642130 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1642130 00:06:35.760 16:27:42 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1642130 00:06:36.017 spdk_app_start is called in Round 0. 00:06:36.017 Shutdown signal received, stop current app iteration 00:06:36.017 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 reinitialization... 00:06:36.017 spdk_app_start is called in Round 1. 00:06:36.017 Shutdown signal received, stop current app iteration 00:06:36.017 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 reinitialization... 00:06:36.017 spdk_app_start is called in Round 2. 00:06:36.017 Shutdown signal received, stop current app iteration 00:06:36.017 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 reinitialization... 00:06:36.017 spdk_app_start is called in Round 3. 
00:06:36.017 Shutdown signal received, stop current app iteration 00:06:36.017 16:27:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.017 16:27:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.017 00:06:36.017 real 0m17.779s 00:06:36.017 user 0m39.207s 00:06:36.017 sys 0m3.292s 00:06:36.017 16:27:43 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.017 16:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 ************************************ 00:06:36.017 END TEST app_repeat 00:06:36.017 ************************************ 00:06:36.017 16:27:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.017 16:27:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.017 16:27:43 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.017 16:27:43 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.017 16:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 ************************************ 00:06:36.017 START TEST cpu_locks 00:06:36.017 ************************************ 00:06:36.017 16:27:43 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.017 * Looking for test storage... 00:06:36.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.017 16:27:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.017 16:27:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.017 16:27:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.017 16:27:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.017 16:27:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.017 16:27:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.017 16:27:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.275 ************************************ 00:06:36.275 START TEST default_locks 00:06:36.275 ************************************ 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1644477 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1644477 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1644477 ']' 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.275 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.275 [2024-05-15 16:27:43.293405] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:36.275 [2024-05-15 16:27:43.293485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644477 ] 00:06:36.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.275 [2024-05-15 16:27:43.363765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.275 [2024-05-15 16:27:43.450877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.533 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.533 16:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:36.533 16:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1644477 00:06:36.533 16:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1644477 00:06:36.533 16:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.097 lslocks: write error 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1644477 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1644477 ']' 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1644477 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1644477 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1644477' 00:06:37.097 killing process with pid 1644477 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1644477 00:06:37.097 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1644477 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1644477 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1644477 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 1644477 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1644477 ']' 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1644477) - No such process 00:06:37.354 ERROR: process (pid: 1644477) is no longer running 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.354 00:06:37.354 real 0m1.273s 00:06:37.354 user 0m1.189s 00:06:37.354 sys 0m0.535s 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.354 16:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 ************************************ 00:06:37.354 END TEST default_locks 00:06:37.354 ************************************ 00:06:37.354 16:27:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.354 16:27:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.354 16:27:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.354 16:27:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 ************************************ 00:06:37.354 START TEST default_locks_via_rpc 00:06:37.354 ************************************ 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1644648 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1644648 00:06:37.354 16:27:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1644648 ']' 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.354 16:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.611 [2024-05-15 16:27:44.619716] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:37.611 [2024-05-15 16:27:44.619817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644648 ] 00:06:37.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.612 [2024-05-15 16:27:44.686057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.612 [2024-05-15 16:27:44.770599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1644648 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1644648 00:06:37.869 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1644648 00:06:38.127 16:27:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1644648 ']' 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1644648 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1644648 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1644648' 00:06:38.127 killing process with pid 1644648 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 1644648 00:06:38.127 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1644648 00:06:38.691 00:06:38.691 real 0m1.178s 00:06:38.691 user 0m1.131s 00:06:38.691 sys 0m0.506s 00:06:38.691 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.691 16:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.691 ************************************ 00:06:38.691 END TEST default_locks_via_rpc 00:06:38.691 ************************************ 00:06:38.691 16:27:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.691 16:27:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.691 16:27:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.691 16:27:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.691 ************************************ 00:06:38.691 START TEST non_locking_app_on_locked_coremask 00:06:38.691 ************************************ 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1644808 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1644808 /var/tmp/spdk.sock 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1644808 ']' 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
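The killprocess sequence traced above is the harness's standard teardown: probe the pid with kill -0, read the command name with ps (the uname check selects the Linux variant) so a recycled pid is not killed by mistake, special-case processes whose comm is sudo, then deliver the default SIGTERM and reap the child with wait. The common path, with the sudo branch and error plumbing simplified away, is approximately:

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                 # bail out if the pid is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")    # guard against pid reuse
      echo "killing process with pid $pid"       # the message seen in this log
      kill "$pid"                                # default signal, SIGTERM
      wait "$pid" || true                        # reap; tolerate a nonzero exit
    }

wait works here only because spdk_tgt is a child of the same shell that tears it down.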
00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.691 16:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.691 [2024-05-15 16:27:45.857042] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:38.691 [2024-05-15 16:27:45.857123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644808 ] 00:06:38.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.949 [2024-05-15 16:27:45.929603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.949 [2024-05-15 16:27:46.019779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1644931 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1644931 /var/tmp/spdk2.sock 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1644931 ']' 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.206 16:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.206 [2024-05-15 16:27:46.324282] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:39.206 [2024-05-15 16:27:46.324351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644931 ] 00:06:39.206 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.206 [2024-05-15 16:27:46.423250] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
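The "CPU core locks deactivated." notice above is the point of non_locking_app_on_locked_coremask: by default an SPDK app claims every core in its -m mask through a lock file (the /var/tmp/spdk_cpu_lock_* files checked elsewhere in this log), so a second app asking for an already claimed core refuses to start. The --disable-cpumask-locks flag makes this second instance skip the claim. Stripped of harness plumbing, the scenario being exercised is (command lines taken from the trace, working directory assumed):

    # first target locks core 0 (mask 0x1) and serves RPC on the default socket
    ./build/bin/spdk_tgt -m 0x1 &

    # second target may share core 0 only because it skips the lock files;
    # it gets its own RPC socket so the two instances do not collide
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &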
00:06:39.206 [2024-05-15 16:27:46.423321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.463 [2024-05-15 16:27:46.599640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.395 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.395 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:40.395 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1644808 00:06:40.395 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1644808 00:06:40.395 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.653 lslocks: write error 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1644808 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1644808 ']' 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1644808 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1644808 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1644808' 00:06:40.653 killing process with pid 1644808 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1644808 00:06:40.653 16:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1644808 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1644931 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1644931 ']' 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1644931 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1644931 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1644931' 00:06:41.584 
killing process with pid 1644931 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1644931 00:06:41.584 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1644931 00:06:41.841 00:06:41.841 real 0m3.147s 00:06:41.841 user 0m3.265s 00:06:41.841 sys 0m1.092s 00:06:41.842 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.842 16:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.842 ************************************ 00:06:41.842 END TEST non_locking_app_on_locked_coremask 00:06:41.842 ************************************ 00:06:41.842 16:27:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.842 16:27:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.842 16:27:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.842 16:27:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.842 ************************************ 00:06:41.842 START TEST locking_app_on_unlocked_coremask 00:06:41.842 ************************************ 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1645241 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1645241 /var/tmp/spdk.sock 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1645241 ']' 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.842 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.842 [2024-05-15 16:27:49.056427] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:41.842 [2024-05-15 16:27:49.056525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645241 ] 00:06:42.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.101 [2024-05-15 16:27:49.124000] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
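The bare "lslocks: write error" lines in these tests are expected noise rather than failures. The locks_exist helper traced at event/cpu_locks.sh@22 pipes lslocks into grep -q; grep -q exits as soon as it sees a match, the pipe closes, and lslocks reports a write error (EPIPE) for the output it could not flush. Reconstructed from the traced lines, the check amounts to:

    locks_exist() {
      local pid=$1
      # grep -q exits on the first match; that early exit is what breaks
      # lslocks's stdout and produces the harmless 'lslocks: write error'
      lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

Only the exit status (does this pid hold any spdk_cpu_lock_* file?) is consumed by the tests.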
00:06:42.101 [2024-05-15 16:27:49.124041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.101 [2024-05-15 16:27:49.209656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1645364 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1645364 /var/tmp/spdk2.sock 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1645364 ']' 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.358 16:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.358 [2024-05-15 16:27:49.520028] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:42.358 [2024-05-15 16:27:49.520112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645364 ] 00:06:42.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.616 [2024-05-15 16:27:49.633768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.616 [2024-05-15 16:27:49.810338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.549 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.549 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.549 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1645364 00:06:43.549 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1645364 00:06:43.549 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.806 lslocks: write error 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1645241 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1645241 ']' 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1645241 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1645241 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1645241' 00:06:43.806 killing process with pid 1645241 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1645241 00:06:43.806 16:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1645241 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1645364 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1645364 ']' 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1645364 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1645364 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1645364' 00:06:44.738 killing process with pid 1645364 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1645364 00:06:44.738 16:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1645364 00:06:44.995 00:06:44.995 real 0m3.139s 00:06:44.995 user 0m3.274s 00:06:44.995 sys 0m1.039s 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.995 ************************************ 00:06:44.995 END TEST locking_app_on_unlocked_coremask 00:06:44.995 ************************************ 00:06:44.995 16:27:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.995 16:27:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.995 16:27:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.995 16:27:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.995 ************************************ 00:06:44.995 START TEST locking_app_on_locked_coremask 00:06:44.995 ************************************ 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1645671 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1645671 /var/tmp/spdk.sock 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1645671 ']' 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.995 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.253 [2024-05-15 16:27:52.246435] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:45.253 [2024-05-15 16:27:52.246510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645671 ] 00:06:45.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.253 [2024-05-15 16:27:52.317013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.253 [2024-05-15 16:27:52.402847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1645695 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1645695 /var/tmp/spdk2.sock 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1645695 /var/tmp/spdk2.sock 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1645695 /var/tmp/spdk2.sock 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1645695 ']' 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.511 16:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.511 [2024-05-15 16:27:52.708596] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
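locking_app_on_locked_coremask inverts the earlier cases: the first target (pid 1645671) keeps core locks enabled, so the second instance (pid 1645695) on the same mask must fail to start, and the script asserts that failure with the NOT wrapper traced above (its exit-status checks complete after the failed launch below). Judging from the traced steps (es=0, run the command, then the (( es > 128 )) and (( !es == 0 )) tests), NOT behaves roughly like this sketch; the valid_exec_arg validation and the signal-exit special cases are elided:

    NOT() {
      local es=0
      "$@" || es=$?   # run the wrapped command, remember how it failed
      # the real helper also inspects es > 128 (death by signal); elided here
      (( es != 0 ))   # succeed only if the command failed
    }

The claim_cpu_cores error that follows is therefore the test passing, not breaking.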
00:06:45.511 [2024-05-15 16:27:52.708675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645695 ] 00:06:45.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.769 [2024-05-15 16:27:52.820546] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1645671 has claimed it. 00:06:45.769 [2024-05-15 16:27:52.820596] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1645695) - No such process 00:06:46.332 ERROR: process (pid: 1645695) is no longer running 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1645671 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1645671 00:06:46.332 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.589 lslocks: write error 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1645671 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1645671 ']' 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1645671 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1645671 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1645671' 00:06:46.589 killing process with pid 1645671 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1645671 00:06:46.589 16:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1645671 00:06:47.154 00:06:47.154 real 0m1.908s 00:06:47.154 user 0m2.041s 00:06:47.154 sys 0m0.649s 00:06:47.154 16:27:54 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.154 16:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.154 ************************************ 00:06:47.154 END TEST locking_app_on_locked_coremask 00:06:47.154 ************************************ 00:06:47.154 16:27:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.154 16:27:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.154 16:27:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.154 16:27:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.154 ************************************ 00:06:47.154 START TEST locking_overlapped_coremask 00:06:47.154 ************************************ 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1645969 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1645969 /var/tmp/spdk.sock 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1645969 ']' 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.154 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.154 [2024-05-15 16:27:54.213494] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:47.154 [2024-05-15 16:27:54.213579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645969 ] 00:06:47.154 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.154 [2024-05-15 16:27:54.284491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.154 [2024-05-15 16:27:54.370848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.154 [2024-05-15 16:27:54.370917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.154 [2024-05-15 16:27:54.370920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1645981 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1645981 /var/tmp/spdk2.sock 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1645981 /var/tmp/spdk2.sock 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1645981 /var/tmp/spdk2.sock 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1645981 ']' 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.441 16:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.699 [2024-05-15 16:27:54.671323] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:06:47.699 [2024-05-15 16:27:54.671402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645981 ] 00:06:47.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.699 [2024-05-15 16:27:54.773984] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1645969 has claimed it. 00:06:47.699 [2024-05-15 16:27:54.774058] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1645981) - No such process 00:06:48.264 ERROR: process (pid: 1645981) is no longer running 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1645969 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1645969 ']' 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1645969 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1645969 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1645969' 00:06:48.264 killing process with pid 1645969 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
1645969 00:06:48.264 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1645969 00:06:48.829 00:06:48.829 real 0m1.639s 00:06:48.829 user 0m4.420s 00:06:48.829 sys 0m0.458s 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.829 ************************************ 00:06:48.829 END TEST locking_overlapped_coremask 00:06:48.829 ************************************ 00:06:48.829 16:27:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.829 16:27:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.829 16:27:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.829 16:27:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.829 ************************************ 00:06:48.829 START TEST locking_overlapped_coremask_via_rpc 00:06:48.829 ************************************ 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1646148 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1646148 /var/tmp/spdk.sock 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1646148 ']' 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.829 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.830 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.830 16:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.830 [2024-05-15 16:27:55.904154] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:48.830 [2024-05-15 16:27:55.904245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646148 ] 00:06:48.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.830 [2024-05-15 16:27:55.977223] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
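check_remaining_locks, traced at event/cpu_locks.sh@36-38 just before the teardown above, asserts that after the refused second launch exactly the three lock files for mask 0x7 (cores 0 through 2) are left in /var/tmp. The backslash-heavy [[ ]] line is only bash xtrace escaping the right-hand side of the comparison. In plain form the helper is approximately (quoting here is assumed):

    check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually on disk
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, from -m 0x7
      # join both arrays into strings and require an exact match
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }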
00:06:48.830 [2024-05-15 16:27:55.977268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.088 [2024-05-15 16:27:56.075238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.088 [2024-05-15 16:27:56.075261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.088 [2024-05-15 16:27:56.075264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1646276 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1646276 /var/tmp/spdk2.sock 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1646276 ']' 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.088 16:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.345 [2024-05-15 16:27:56.356458] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:49.345 [2024-05-15 16:27:56.356550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646276 ] 00:06:49.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.345 [2024-05-15 16:27:56.455880] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
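The two cpumasks here are chosen to overlap on exactly one core: 0x7 is binary 00111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so 0x7 & 0x1c = 0x4, which is core 2 alone; that matches the reactor cores each instance reports. Both targets start with --disable-cpumask-locks so the overlapping masks are tolerated at launch; the test then re-enables locking over RPC, where the shared core must surface as a conflict, as the claim error below shows.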
00:06:49.345 [2024-05-15 16:27:56.455918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.603 [2024-05-15 16:27:56.626888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.603 [2024-05-15 16:27:56.630283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:49.603 [2024-05-15 16:27:56.630286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.167 [2024-05-15 16:27:57.313313] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1646148 has claimed it. 
00:06:50.167 request: 00:06:50.167 { 00:06:50.167 "method": "framework_enable_cpumask_locks", 00:06:50.167 "req_id": 1 00:06:50.167 } 00:06:50.167 Got JSON-RPC error response 00:06:50.167 response: 00:06:50.167 { 00:06:50.167 "code": -32603, 00:06:50.167 "message": "Failed to claim CPU core: 2" 00:06:50.167 } 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1646148 /var/tmp/spdk.sock 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1646148 ']' 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.167 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.168 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1646276 /var/tmp/spdk2.sock 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1646276 ']' 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
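That JSON-RPC exchange is the expected outcome: -32603 is the generic JSON-RPC internal-error code, and the message names core 2, the one core the two masks share. In the trace this is driven through the harness's rpc_cmd wrapper; standalone, the same experiment would look roughly like the following, assuming the stock scripts/rpc.py client exposes the method under its RPC name:

    # instance 1 (mask 0x7) claims cores 0-2; nothing holds them yet, so this succeeds
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # instance 2 (mask 0x1c) then asks for cores 2-4 and must be refused on core 2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks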
00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.425 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.681 00:06:50.681 real 0m1.947s 00:06:50.681 user 0m1.046s 00:06:50.681 sys 0m0.163s 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.681 16:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.681 ************************************ 00:06:50.681 END TEST locking_overlapped_coremask_via_rpc 00:06:50.681 ************************************ 00:06:50.681 16:27:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.681 16:27:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1646148 ]] 00:06:50.681 16:27:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1646148 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1646148 ']' 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1646148 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1646148 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1646148' 00:06:50.681 killing process with pid 1646148 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1646148 00:06:50.681 16:27:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1646148 00:06:51.245 16:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1646276 ]] 00:06:51.245 16:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1646276 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1646276 ']' 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1646276 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1646276 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1646276' 00:06:51.245 killing process with pid 1646276 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1646276 00:06:51.245 16:27:58 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1646276 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1646148 ]] 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1646148 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1646148 ']' 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1646148 00:06:51.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1646148) - No such process 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1646148 is not found' 00:06:51.503 Process with pid 1646148 is not found 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1646276 ]] 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1646276 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1646276 ']' 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1646276 00:06:51.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1646276) - No such process 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1646276 is not found' 00:06:51.503 Process with pid 1646276 is not found 00:06:51.503 16:27:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.503 00:06:51.503 real 0m15.513s 00:06:51.503 user 0m26.972s 00:06:51.503 sys 0m5.360s 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.503 16:27:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.503 ************************************ 00:06:51.503 END TEST cpu_locks 00:06:51.503 ************************************ 00:06:51.503 00:06:51.503 real 0m41.939s 00:06:51.503 user 1m20.792s 00:06:51.503 sys 0m9.526s 00:06:51.503 16:27:58 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.503 16:27:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.503 ************************************ 00:06:51.503 END TEST event 00:06:51.503 ************************************ 00:06:51.503 16:27:58 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:51.503 16:27:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.503 16:27:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.761 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 ************************************ 00:06:51.761 START TEST thread 00:06:51.761 ************************************ 00:06:51.761 16:27:58 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:51.761 * Looking for test storage... 00:06:51.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:51.761 16:27:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.761 16:27:58 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:51.761 16:27:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.761 16:27:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 ************************************ 00:06:51.761 START TEST thread_poller_perf 00:06:51.761 ************************************ 00:06:51.761 16:27:58 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.761 [2024-05-15 16:27:58.857356] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:51.761 [2024-05-15 16:27:58.857414] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646643 ] 00:06:51.761 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.761 [2024-05-15 16:27:58.928029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.018 [2024-05-15 16:27:59.014674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.018 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:52.949 ====================================== 00:06:52.949 busy:2707825073 (cyc) 00:06:52.949 total_run_count: 291000 00:06:52.949 tsc_hz: 2700000000 (cyc) 00:06:52.949 ====================================== 00:06:52.949 poller_cost: 9305 (cyc), 3446 (nsec) 00:06:52.949 00:06:52.949 real 0m1.258s 00:06:52.949 user 0m1.166s 00:06:52.949 sys 0m0.086s 00:06:52.949 16:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.949 16:28:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.949 ************************************ 00:06:52.949 END TEST thread_poller_perf 00:06:52.949 ************************************ 00:06:52.949 16:28:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.949 16:28:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:52.949 16:28:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.949 16:28:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.949 ************************************ 00:06:52.949 START TEST thread_poller_perf 00:06:52.949 ************************************ 00:06:52.949 16:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.949 [2024-05-15 16:28:00.172944] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
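[Annotation] The poller_cost figure in the summary above can be reproduced from the other two numbers: busy cycles divided by total_run_count gives cycles per poll, and the reported TSC frequency converts that to nanoseconds. A quick bash check using the values from this run (the formula is inferred from the printed output, not taken from poller_perf's source):

    busy=2707825073 runs=291000 tsc_hz=2700000000
    cycles_per_poll=$((busy / runs))                         # 2707825073 / 291000 = 9305 cyc
    nsec_per_poll=$((cycles_per_poll * 1000000000 / tsc_hz)) # 9305 / 2.7 GHz = 3446 nsec
    echo "poller_cost: $cycles_per_poll (cyc), $nsec_per_poll (nsec)"

Both derived values match the poller_cost line printed by the tool.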
00:06:52.949 [2024-05-15 16:28:00.173012] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646797 ] 00:06:53.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.207 [2024-05-15 16:28:00.246603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.207 [2024-05-15 16:28:00.336753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.207 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:54.577 ====================================== 00:06:54.577 busy:2702785935 (cyc) 00:06:54.577 total_run_count: 3852000 00:06:54.577 tsc_hz: 2700000000 (cyc) 00:06:54.577 ====================================== 00:06:54.577 poller_cost: 701 (cyc), 259 (nsec) 00:06:54.577 00:06:54.577 real 0m1.261s 00:06:54.577 user 0m1.157s 00:06:54.577 sys 0m0.098s 00:06:54.577 16:28:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.577 16:28:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.577 ************************************ 00:06:54.577 END TEST thread_poller_perf 00:06:54.577 ************************************ 00:06:54.577 16:28:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:54.577 00:06:54.577 real 0m2.683s 00:06:54.577 user 0m2.385s 00:06:54.577 sys 0m0.293s 00:06:54.577 16:28:01 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.577 16:28:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.577 ************************************ 00:06:54.577 END TEST thread 00:06:54.577 ************************************ 00:06:54.577 16:28:01 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:54.577 16:28:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.577 16:28:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.577 16:28:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.577 ************************************ 00:06:54.577 START TEST accel 00:06:54.577 ************************************ 00:06:54.577 16:28:01 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:54.577 * Looking for test storage... 
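[Annotation] The second poller_perf run above (-l 0, i.e. no timer period) checks out the same way: 2702785935 cyc / 3852000 polls gives about 701 cyc, and 701 / 2.7 GHz is about 259 nsec, matching the printed poller_cost. The roughly 13x higher run count versus the 1-microsecond-period run is consistent with the pollers firing on every reactor iteration instead of on a timer.

    echo $((2702785935 / 3852000))   # -> 701, matching the poller_cost line above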
00:06:54.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:54.577 16:28:01 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:54.577 16:28:01 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:54.577 16:28:01 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.577 16:28:01 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1646990 00:06:54.577 16:28:01 accel -- accel/accel.sh@63 -- # waitforlisten 1646990 00:06:54.577 16:28:01 accel -- common/autotest_common.sh@827 -- # '[' -z 1646990 ']' 00:06:54.577 16:28:01 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.577 16:28:01 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:54.577 16:28:01 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:54.577 16:28:01 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.577 16:28:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.577 16:28:01 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.578 16:28:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.578 16:28:01 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.578 16:28:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.578 16:28:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 16:28:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.578 16:28:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.578 16:28:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:54.578 16:28:01 accel -- accel/accel.sh@41 -- # jq -r . 00:06:54.578 [2024-05-15 16:28:01.600011] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:54.578 [2024-05-15 16:28:01.600088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646990 ] 00:06:54.578 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.578 [2024-05-15 16:28:01.670682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.578 [2024-05-15 16:28:01.757240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@860 -- # return 0 00:06:54.836 16:28:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:54.836 16:28:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:54.836 16:28:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:54.836 16:28:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:54.836 16:28:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:54.836 16:28:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:54.836 16:28:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 
16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.836 16:28:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.836 16:28:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.836 16:28:02 accel -- accel/accel.sh@75 -- # killprocess 1646990 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@946 -- # '[' -z 1646990 ']' 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@950 -- # kill -0 1646990 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@951 -- # uname 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.836 16:28:02 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1646990 00:06:55.094 16:28:02 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.094 16:28:02 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.094 16:28:02 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1646990' 00:06:55.094 killing process with pid 1646990 00:06:55.094 16:28:02 accel -- common/autotest_common.sh@965 -- # kill 1646990 00:06:55.094 16:28:02 accel -- common/autotest_common.sh@970 -- # wait 1646990 00:06:55.351 16:28:02 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:55.351 16:28:02 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:55.351 16:28:02 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:55.351 16:28:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.351 16:28:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.351 16:28:02 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:55.351 16:28:02 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
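[Annotation] The long IFS==/read run above is get_expected_opcs() turning the accel_get_opc_assignments RPC reply into the expected_opcs map; in this run every opcode resolves to the software module. Roughly, with the loop body reconstructed from the trace ($rpc_py is the harness's RPC wrapper and the <<< redirection is an assumption):

    exp_opcs=($("$rpc_py" accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS="=" read -r opc module <<< "$opc_opt"  # split e.g. "copy=software"
        expected_opcs["$opc"]=$module
    done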
00:06:55.351 16:28:02 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.351 16:28:02 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:55.351 16:28:02 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:55.351 16:28:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:55.351 16:28:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.351 16:28:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.609 ************************************ 00:06:55.609 START TEST accel_missing_filename 00:06:55.609 ************************************ 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.609 16:28:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:55.609 16:28:02 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:55.609 [2024-05-15 16:28:02.622303] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:55.609 [2024-05-15 16:28:02.622361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647158 ] 00:06:55.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.609 [2024-05-15 16:28:02.695803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.609 [2024-05-15 16:28:02.785735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.876 [2024-05-15 16:28:02.846823] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.876 [2024-05-15 16:28:02.924310] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:55.876 A filename is required. 
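[Annotation] The es= sequence that follows is autotest_common.sh's NOT() wrapper deciding that this failure was the expected outcome. A simplified reconstruction consistent with the traced path (the real helper distinguishes more cases):

    NOT() {
        local es=0
        "$@" || es=$?                     # here: accel_perf aborts with status 234
        ((es > 128)) && es=$((es - 128))  # strip the fatal-signal bias: 234 -> 106
        case "$es" in
            0) ;;                         # clean exit stays 0
            *) es=1 ;;                    # collapse any failure to 1
        esac
        ((!es == 0))                      # NOT succeeds only when the command failed
    }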
00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.876 00:06:55.876 real 0m0.399s 00:06:55.876 user 0m0.286s 00:06:55.876 sys 0m0.147s 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.876 16:28:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:55.876 ************************************ 00:06:55.876 END TEST accel_missing_filename 00:06:55.876 ************************************ 00:06:55.876 16:28:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.876 16:28:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:55.876 16:28:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.876 16:28:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.876 ************************************ 00:06:55.876 START TEST accel_compress_verify 00:06:55.876 ************************************ 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.876 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.876 
16:28:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:55.876 16:28:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:55.876 [2024-05-15 16:28:03.070660] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:55.876 [2024-05-15 16:28:03.070725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647309 ] 00:06:56.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.135 [2024-05-15 16:28:03.144282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.135 [2024-05-15 16:28:03.234808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.135 [2024-05-15 16:28:03.297651] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.393 [2024-05-15 16:28:03.383095] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:56.393 00:06:56.393 Compression does not support the verify option, aborting. 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.393 00:06:56.393 real 0m0.410s 00:06:56.393 user 0m0.292s 00:06:56.393 sys 0m0.151s 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.393 16:28:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:56.393 ************************************ 00:06:56.393 END TEST accel_compress_verify 00:06:56.393 ************************************ 00:06:56.393 16:28:03 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:56.393 16:28:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.393 16:28:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.393 16:28:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.393 ************************************ 00:06:56.393 START TEST accel_wrong_workload 00:06:56.393 ************************************ 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
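[Annotation] Just above, NOT() first calls valid_exec_arg to make sure accel_perf is something bash can actually run before launching it with the bogus -w foobar workload. A plausible shape for that helper, matching the type -t probes in the trace (the accepted-type list is an assumption):

    valid_exec_arg() {
        local arg=$1
        # only things bash can execute pass the check
        case "$(type -t "$arg")" in
            function | builtin | file | alias | keyword) return 0 ;;
            *) return 1 ;;
        esac
    }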
00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:56.393 16:28:03 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:56.393 Unsupported workload type: foobar 00:06:56.393 [2024-05-15 16:28:03.529957] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:56.393 accel_perf options: 00:06:56.393 [-h help message] 00:06:56.393 [-q queue depth per core] 00:06:56.393 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.393 [-T number of threads per core 00:06:56.393 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.393 [-t time in seconds] 00:06:56.393 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.393 [ dif_verify, , dif_generate, dif_generate_copy 00:06:56.393 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.393 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.393 [-S for crc32c workload, use this seed value (default 0) 00:06:56.393 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.393 [-f for fill workload, use this BYTE value (default 255) 00:06:56.393 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.393 [-y verify result if this switch is on] 00:06:56.393 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.393 Can be used to spread operations across a wider range of memory. 
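[Annotation] The option summary above is accel_perf's usage text, printed because foobar is not in its workload list; the harness counts that parse failure as the test passing. For contrast, a valid invocation of the same binary, exactly as run a little further down for the crc32c test:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w crc32c -S 32 -y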
00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.393 00:06:56.393 real 0m0.021s 00:06:56.393 user 0m0.010s 00:06:56.393 sys 0m0.011s 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.393 16:28:03 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:56.393 ************************************ 00:06:56.393 END TEST accel_wrong_workload 00:06:56.393 ************************************ 00:06:56.393 Error: writing output failed: Broken pipe 00:06:56.393 16:28:03 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.393 16:28:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:56.393 16:28:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.393 16:28:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.393 ************************************ 00:06:56.393 START TEST accel_negative_buffers 00:06:56.393 ************************************ 00:06:56.393 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.393 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:56.394 16:28:03 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:56.394 -x option must be non-negative. 
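[Annotation] Every START/END banner pair in this log is emitted by the run_test wrapper, which also prints the real/user/sys timing lines. A rough, assumed reconstruction of its shape (the traced helper additionally guards argument counts and toggles xtrace):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }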
00:06:56.394 [2024-05-15 16:28:03.606742] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:56.394 accel_perf options: 00:06:56.394 [-h help message] 00:06:56.394 [-q queue depth per core] 00:06:56.394 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.394 [-T number of threads per core 00:06:56.394 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.394 [-t time in seconds] 00:06:56.394 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.394 [ dif_verify, , dif_generate, dif_generate_copy 00:06:56.394 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.394 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.394 [-S for crc32c workload, use this seed value (default 0) 00:06:56.394 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.394 [-f for fill workload, use this BYTE value (default 255) 00:06:56.394 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.394 [-y verify result if this switch is on] 00:06:56.394 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.394 Can be used to spread operations across a wider range of memory. 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.394 00:06:56.394 real 0m0.024s 00:06:56.394 user 0m0.012s 00:06:56.394 sys 0m0.011s 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.394 16:28:03 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:56.394 ************************************ 00:06:56.394 END TEST accel_negative_buffers 00:06:56.394 ************************************ 00:06:56.652 Error: writing output failed: Broken pipe 00:06:56.652 16:28:03 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:56.652 16:28:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:56.652 16:28:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.652 16:28:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.652 ************************************ 00:06:56.652 START TEST accel_crc32c 00:06:56.652 ************************************ 00:06:56.652 16:28:03 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:56.652 16:28:03 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:56.652 [2024-05-15 16:28:03.675658] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:56.652 [2024-05-15 16:28:03.675723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647369 ] 00:06:56.652 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.652 [2024-05-15 16:28:03.749435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.652 [2024-05-15 16:28:03.839932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.909 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.910 16:28:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.841 16:28:05 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:57.841 16:28:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.841 00:06:57.841 real 0m1.407s 00:06:57.841 user 0m1.253s 00:06:57.841 sys 0m0.155s 00:06:57.841 16:28:05 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.841 16:28:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:57.841 ************************************ 00:06:57.841 END TEST accel_crc32c 00:06:57.841 ************************************ 00:06:58.098 16:28:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:58.098 16:28:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:58.098 16:28:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.098 16:28:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.098 ************************************ 00:06:58.098 START TEST accel_crc32c_C2 00:06:58.098 ************************************ 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.098 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.098 [2024-05-15 16:28:05.130315] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:58.098 [2024-05-15 16:28:05.130371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647536 ] 00:06:58.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.099 [2024-05-15 16:28:05.201669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.099 [2024-05-15 16:28:05.293512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.356 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.357 16:28:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.289 16:28:06 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.289 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.290 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.547 00:06:59.547 real 0m1.403s 00:06:59.547 user 0m1.252s 00:06:59.547 sys 0m0.152s 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.547 16:28:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.547 ************************************ 00:06:59.547 END TEST accel_crc32c_C2 00:06:59.548 ************************************ 00:06:59.548 16:28:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.548 16:28:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:59.548 16:28:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.548 16:28:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.548 ************************************ 00:06:59.548 START TEST accel_copy 00:06:59.548 ************************************ 00:06:59.548 16:28:06 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.548 16:28:06 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:59.548 16:28:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:59.548 [2024-05-15 16:28:06.588988] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:06:59.548 [2024-05-15 16:28:06.589055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647805 ] 00:06:59.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.548 [2024-05-15 16:28:06.661621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.548 [2024-05-15 16:28:06.751945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.805 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.806 16:28:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:01.175 16:28:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:01.175
00:07:01.175 real 0m1.420s
00:07:01.175 user 0m1.264s
00:07:01.175 sys 0m0.158s
00:07:01.175 16:28:07 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:01.175 16:28:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:01.175 ************************************
00:07:01.175 END TEST accel_copy
00:07:01.175 ************************************
00:07:01.175 16:28:08 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.175 16:28:08 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:07:01.175 16:28:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:01.175 16:28:08 accel -- common/autotest_common.sh@10 -- # set +x
00:07:01.176 ************************************
00:07:01.176 START TEST accel_fill
00:07:01.176 ************************************
00:07:01.176 16:28:08 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:01.176 16:28:08 accel.accel_fill --
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.175 16:28:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:01.175 16:28:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:01.175 [2024-05-15 16:28:08.057710] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:01.175 [2024-05-15 16:28:08.057775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647962 ] 00:07:01.175 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.175 [2024-05-15 16:28:08.127799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.175 [2024-05-15 16:28:08.218124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.176 16:28:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:02.571 16:28:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:02.571
00:07:02.571 real 0m1.417s
00:07:02.571 user 0m1.265s
00:07:02.571 sys 0m0.155s
00:07:02.571 16:28:09 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:02.571 16:28:09 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:02.571 ************************************
00:07:02.571 END TEST accel_fill
00:07:02.571 ************************************
00:07:02.571 16:28:09 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:02.571 16:28:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:02.571 16:28:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:02.571 16:28:09 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.571 ************************************
00:07:02.571 START TEST accel_copy_crc32c
00:07:02.571 ************************************
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
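The build_accel_config trace just above shows how every test in this log hands configuration to accel_perf: a JSON fragment (empty here, since accel_json_cfg=() stays empty) is piped through `jq -r .` and delivered to the binary as `-c /dev/fd/62`. A minimal sketch of that mechanism, assuming bash and an SPDK build tree at the illustrative $SPDK_DIR (the variable is not from the log, only the path and flags are):

  # Feed a JSON config to accel_perf over an anonymous fd via process
  # substitution; bash materialises the pipe as /dev/fd/NN, matching the
  # /dev/fd/62 recorded above. An empty object mirrors the empty config.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf -c <(echo '{}' | jq -r .) -t 1 -w copy_crc32c -y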
00:07:02.571 [2024-05-15 16:28:09.518657] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:02.571 [2024-05-15 16:28:09.518718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648115 ] 00:07:02.571 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.571 [2024-05-15 16:28:09.591912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.571 [2024-05-15 16:28:09.681563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.571 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.572 16:28:09 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.572 16:28:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=:
00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:03.954 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:03.955
00:07:03.955 real 0m1.412s
00:07:03.955 user 0m1.255s
00:07:03.955 sys 0m0.159s
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:03.955 16:28:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:03.955 ************************************
00:07:03.955 END TEST accel_copy_crc32c
00:07:03.955 ************************************
00:07:03.955 16:28:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:03.955 16:28:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:03.955 16:28:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:03.955 16:28:10 accel -- common/autotest_common.sh@10 -- # set +x
00:07:03.955 ************************************
00:07:03.955 START TEST accel_copy_crc32c_C2
00:07:03.955 ************************************
00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read
-r var val 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.955 16:28:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:03.955 [2024-05-15 16:28:10.977568] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:03.955 [2024-05-15 16:28:10.977631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648396 ] 00:07:03.955 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.955 [2024-05-15 16:28:11.048732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.955 [2024-05-15 16:28:11.135356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.212 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.213 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.213 16:28:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.145 00:07:05.145 real 0m1.402s 00:07:05.145 user 0m1.260s 00:07:05.145 sys 0m0.145s 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.145 16:28:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:05.145 
************************************
00:07:05.145 END TEST accel_copy_crc32c_C2
00:07:05.145 ************************************
00:07:05.403 16:28:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:05.403 16:28:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:05.403 16:28:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:05.403 16:28:12 accel -- common/autotest_common.sh@10 -- # set +x
00:07:05.403 ************************************
00:07:05.403 START TEST accel_dualcast
00:07:05.403 ************************************
00:07:05.403 16:28:12 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:07:05.403 16:28:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
[2024-05-15 16:28:12.438431] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
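The `IFS=:` / `read -r var val` / `case "$var" in` cycles that dominate this trace are the harness splitting accel_perf's colon-separated status output to learn which module and opcode actually ran (hence the later `[[ -n software ]]` / `[[ -n dualcast ]]` checks). A simplified sketch of that pattern, with hypothetical key names since the actual keys are not visible in this trace:

  # Parse "key:value" lines from a worker's output; keys are illustrative.
  while IFS=: read -r var val; do
    case "$var" in
      module) accel_module=$val ;;   # e.g. "software"
      opcode) accel_opc=$val ;;      # e.g. "dualcast"
    esac
  done < <(printf 'module:software\nopcode:dualcast\n')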
00:07:05.404 [2024-05-15 16:28:12.438493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648554 ] 00:07:05.404 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.404 [2024-05-15 16:28:12.510833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.404 [2024-05-15 16:28:12.600573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 
16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 16:28:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.615 16:28:13 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:06.615 16:28:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:06.615
00:07:06.615 real 0m1.418s
00:07:06.615 user 0m1.268s
00:07:06.615 sys 0m0.152s
00:07:06.615 16:28:13 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:06.615 16:28:13 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:06.615 ************************************
00:07:06.615 END TEST accel_dualcast
00:07:06.615 ************************************
00:07:06.873 16:28:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:06.874 16:28:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:06.874 16:28:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:06.874 16:28:13 accel -- common/autotest_common.sh@10 -- # set +x
00:07:06.874 ************************************
00:07:06.874 START TEST accel_compare
00:07:06.874 ************************************
00:07:06.874 16:28:13 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:07:06.874 16:28:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
[2024-05-15 16:28:13.912787] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
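Each test above is wrapped by run_test from common/autotest_common.sh, which prints the START/END banners and times the test body; that is where the real/user/sys triplets in this log come from (each -t 1 run finishes in roughly 1.4 s wall-clock). A reduced sketch of the visible behaviour, not the real helper, which carries more machinery:

  # Banner, timed sub-test, banner -- emits the real/user/sys lines seen above.
  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }
  run_test_sketch accel_compare accel_test -t 1 -w compare -y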
00:07:06.874 [2024-05-15 16:28:13.912852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648711 ] 00:07:06.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.874 [2024-05-15 16:28:13.985994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.874 [2024-05-15 16:28:14.074399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.132 16:28:14 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val
00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:07.132 16:28:14 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:08.506 16:28:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:08.506 16:28:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:08.506 16:28:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:08.506 real 0m1.415s
00:07:08.506 user 0m1.254s
00:07:08.506 sys 0m0.163s
00:07:08.506 16:28:15 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:08.506 16:28:15 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:08.506 ************************************
00:07:08.506 END TEST accel_compare
00:07:08.506 ************************************
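The xtrace above comes from SPDK's test harness: run_test wraps each case with Bash's time builtin and the START/END banners, and accel_test launches the accel_perf example binary with the flags shown in the trace. A minimal sketch for rerunning these cases by hand, assuming the build tree from this job (the harness additionally pipes a JSON accel config to -c /dev/fd/62, omitted here since no hardware modules are configured in this run):

  # Sketch only; SPDK_ROOT stands in for the Jenkins workspace path above.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run for one second, -w: workload type, -y: verify the results.
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w compare -y
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y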
00:07:08.506 16:28:15 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:08.506 ************************************
00:07:08.506 START TEST accel_xor
00:07:08.506 ************************************
00:07:08.506 16:28:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:08.506 16:28:15 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:08.506 16:28:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:08.506 16:28:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:08.507 [2024-05-15 16:28:15.374386] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:07:08.507 [2024-05-15 16:28:15.374442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648931 ]
00:07:08.507 EAL: No free 2048 kB hugepages reported on node 1
00:07:08.507 [2024-05-15 16:28:15.445314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.507 [2024-05-15 16:28:15.535920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:08.507 16:28:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:09.877 16:28:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:09.878 16:28:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:09.878 16:28:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:09.878 real 0m1.413s
00:07:09.878 user 0m0.009s
00:07:09.878 sys 0m0.003s
00:07:09.878 16:28:16 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:09.878 16:28:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:09.878 ************************************
00:07:09.878 END TEST accel_xor
00:07:09.878 ************************************
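Every "val=..." entry in the trace is one pass of the parser in accel/accel.sh: accel_perf prints its configuration and results as colon-separated lines, and the script splits them on ":" to capture which opcode ran and which module executed it. A simplified reconstruction of that loop follows; only IFS=:, read -r var val, accel_opc= and accel_module= are taken verbatim from the trace, while the case patterns and the SPDK_ROOT variable from the earlier sketch are placeholders:

  while IFS=: read -r var val; do
      case "$var" in
          *"workload"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. xor
          *"module"*)   accel_module=${val//[[:space:]]/} ;; # e.g. software
      esac
  done < <("$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y)
  # The test only passes if both variables were captured, which is what the
  # "[[ -n software ]]" and "[[ -n xor ]]" checks above assert.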
00:07:09.878 16:28:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:09.878 ************************************
00:07:09.878 START TEST accel_xor
00:07:09.878 ************************************
00:07:09.878 16:28:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:09.878 16:28:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:09.878 16:28:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:09.878 16:28:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:09.878 [2024-05-15 16:28:16.843925] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:07:09.878 [2024-05-15 16:28:16.843992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649139 ]
00:07:09.878 EAL: No free 2048 kB hugepages reported on node 1
00:07:09.878 [2024-05-15 16:28:16.916208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.878 [2024-05-15 16:28:17.004578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:09.878 16:28:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:11.251 16:28:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:11.251 16:28:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:11.251 16:28:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:11.251 real 0m1.417s
00:07:11.251 user 0m1.261s
00:07:11.251 sys 0m0.158s
00:07:11.251 16:28:18 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:11.251 16:28:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:11.251 ************************************
00:07:11.251 END TEST accel_xor
00:07:11.251 ************************************
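The two accel_xor runs differ only in fan-in: the first traces "val=2", while the second is launched with -x 3 and traces "val=3". Read together with the '4096 bytes' entries, this appears to be an XOR of two (then three) 4 KiB source buffers into one destination, verified afterwards; a sketch of the pair, reusing SPDK_ROOT from above:

  # Same workload, increasing fan-in; per the trace, -x sets the source count.
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y        # 2 sources (default)
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3   # 3 sources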
00:07:11.251 16:28:18 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:11.251 ************************************
00:07:11.251 START TEST accel_dif_verify
00:07:11.251 ************************************
00:07:11.252 16:28:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:11.252 16:28:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:11.252 16:28:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:11.252 16:28:18 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:07:11.252 [2024-05-15 16:28:18.311751] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:07:11.252 [2024-05-15 16:28:18.311816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649299 ]
00:07:11.252 EAL: No free 2048 kB hugepages reported on node 1
00:07:11.252 [2024-05-15 16:28:18.384024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.252 [2024-05-15 16:28:18.474781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:07:11.510 16:28:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:07:12.883 16:28:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:12.883 16:28:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:12.883 16:28:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:12.883 real 0m1.421s
00:07:12.883 user 0m1.276s
00:07:12.883 sys 0m0.150s
00:07:12.883 16:28:19 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:12.883 16:28:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:12.883 ************************************
00:07:12.883 END TEST accel_dif_verify
00:07:12.883 ************************************
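dif_verify is the first case here that is not a plain memory operation: the sizes in its trace ('4096 bytes' transfer, '512 bytes' block, '8 bytes' guard) are consistent with a T10-DIF-style layout in which every 512-byte block carries an 8-byte integrity field that accel_perf recomputes and checks. Under that reading (an interpretation of the traced values, not something the harness prints), each 4 KiB buffer covers:

  # Arithmetic check of the traced sizes, assuming the T10-DIF reading above:
  echo "$(( 4096 / 512 )) protected blocks, $(( (4096 / 512) * 8 )) DIF bytes per buffer"
  # -> 8 protected blocks, 64 DIF bytes per buffer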
00:07:12.883 16:28:19 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:12.883 ************************************
00:07:12.883 START TEST accel_dif_generate
00:07:12.883 ************************************
00:07:12.883 16:28:19 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:12.883 16:28:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:12.883 16:28:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:12.883 16:28:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:07:12.883 [2024-05-15 16:28:19.783980] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:07:12.883 [2024-05-15 16:28:19.784043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649450 ]
00:07:12.883 EAL: No free 2048 kB hugepages reported on node 1
00:07:12.883 [2024-05-15 16:28:19.854943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.883 [2024-05-15 16:28:19.945561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:12.884 16:28:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:14.256 16:28:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:14.256 16:28:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:14.256 16:28:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:14.256 real 0m1.403s
00:07:14.256 user 0m1.252s
00:07:14.256 sys 0m0.154s
00:07:14.256 16:28:21 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:14.256 16:28:21 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:14.256 ************************************
00:07:14.256 END TEST accel_dif_generate
00:07:14.256 ************************************
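Each END banner above is preceded by the same three assertions, and the last one looks mangled only because of how Bash xtrace prints it: when the right-hand side of == inside [[ ]] comes from a quoted expansion, xtrace renders every character backslash-escaped to show it will be matched literally rather than as a glob. Condensed into a single test (the script performs three separate ones), the pass condition is:

  expected=software
  [[ -n $accel_module && -n $accel_opc && $accel_module == "$expected" ]]
  # xtrace renders the quoted comparison as: [[ software == \s\o\f\t\w\a\r\e ]]
  # (an unquoted RHS such as  soft*  would instead be treated as a glob pattern)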
00:07:14.256 16:28:21 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:14.256 ************************************
00:07:14.256 START TEST accel_dif_generate_copy
00:07:14.256 ************************************
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:14.256 [2024-05-15 16:28:21.239636] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:07:14.256 [2024-05-15 16:28:21.239697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649729 ]
00:07:14.256 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.256 [2024-05-15 16:28:21.309570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.256 [2024-05-15 16:28:21.400142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:14.256 16:28:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:07:15.628 16:28:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:15.628 16:28:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:15.628 16:28:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:15.628 real 0m1.400s
00:07:15.628 user 0m1.257s
00:07:15.628 sys 0m0.145s
00:07:15.628 16:28:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:15.628 16:28:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:15.628 ************************************
00:07:15.628 END TEST accel_dif_generate_copy
00:07:15.628 ************************************
00:07:15.628 16:28:22 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:15.628 16:28:22 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.628 ************************************
00:07:15.628 START TEST accel_comp
00:07:15.628 ************************************
00:07:15.628 16:28:22 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.628 16:28:22 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.628 16:28:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:15.628 16:28:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:15.628 [2024-05-15 16:28:22.695900] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:07:15.628 [2024-05-15 16:28:22.695966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649885 ]
00:07:15.628 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.628 [2024-05-15 16:28:22.771789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.885 [2024-05-15 16:28:22.861852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:15.885 16:28:22 accel.accel_comp --
accel/accel.sh@21 -- # case "$var" in 00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.885 16:28:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:17.256 16:28:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.256 00:07:17.256 real 0m1.430s 00:07:17.256 user 0m1.278s 00:07:17.256 sys 0m0.156s 00:07:17.256 16:28:24 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.256 16:28:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:17.256 ************************************ 00:07:17.256 END TEST accel_comp 00:07:17.256 ************************************ 00:07:17.256 16:28:24 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.256 16:28:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:17.256 16:28:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.256 16:28:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.256 ************************************ 00:07:17.256 START TEST accel_decomp 00:07:17.256 ************************************ 00:07:17.256 16:28:24 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:17.256 [2024-05-15 16:28:24.174666] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:17.256 [2024-05-15 16:28:24.174728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650044 ] 00:07:17.256 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.256 [2024-05-15 16:28:24.244467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.256 [2024-05-15 16:28:24.335117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.256 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 16:28:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.628 16:28:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.628 00:07:18.628 real 0m1.410s 00:07:18.628 user 0m1.261s 00:07:18.628 sys 0m0.153s 00:07:18.628 16:28:25 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.628 16:28:25 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:18.628 ************************************ 00:07:18.628 END TEST accel_decomp 00:07:18.628 ************************************ 00:07:18.628 
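Every test in this stretch drives the same accel_perf binary, and the trace shows the full command line (accel_perf -c /dev/fd/62 -t 1 -w decompress -l .../test/accel/bib -y). A standalone re-run might look like the sketch below — the binary path, flags, and input file are copied from the log, while the empty JSON config fed on fd 62 is an assumption (these runs set no accel_json_cfg entries):

    #!/usr/bin/env bash
    # Hypothetical standalone reproduction of one decompress run from this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    args=(
        -c /dev/fd/62              # accel config is handed in as JSON on fd 62
        -t 1                       # matches val='1 seconds' in the dump
        -w decompress              # matches accel_opc=decompress
        -l "$SPDK/test/accel/bib"  # input file used throughout these tests
        -y                         # extra flag the decompress runs carry
    )
    "$SPDK/build/examples/accel_perf" "${args[@]}" 62< <(printf '{}\n')

The *_full variants add -o 0 to the same command line; the only visible difference in their dumps is the buffer size, '111250 bytes' instead of '4096 bytes'.
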
16:28:25 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.628 16:28:25 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:18.628 16:28:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.628 16:28:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.628 ************************************ 00:07:18.628 START TEST accel_decmop_full 00:07:18.628 ************************************ 00:07:18.628 16:28:25 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.628 16:28:25 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:18.628 16:28:25 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:18.628 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.628 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.628 16:28:25 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.628 16:28:25 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:18.629 16:28:25 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:18.629 [2024-05-15 16:28:25.630654] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
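The START/END banners and the real/user/sys summaries that bracket each test come from the run_test wrapper in common/autotest_common.sh (visible in the trace as run_test accel_comp, run_test accel_decmop_full, and so on). A minimal stand-in mirroring only what this log shows — the real wrapper also manages xtrace and exit-status bookkeeping, which is omitted here:

    # Minimal run_test stand-in: banners plus a timed invocation, as seen in
    # this log. Assumes bash's time keyword for the real/user/sys lines.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
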
00:07:18.629 [2024-05-15 16:28:25.630721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650297 ] 00:07:18.629 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.629 [2024-05-15 16:28:25.700702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.629 [2024-05-15 16:28:25.792477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
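The EAL parameters line above shows this run pinned to core mask 0x1 (-c 0x1), and the notices that follow report one available core and a single reactor on core 0; the *_mcore tests later in this log pass -m 0xf and start four reactors on cores 0-3. A small sketch of how such a hex mask expands to a core list, assuming the usual one-bit-per-core convention:

    # Expand an SPDK-style core mask into the cores the
    # "Reactor started on core N" notices correspond to.
    mask=0x1   # this run; the mcore tests use 0xf
    for core in {0..63}; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # 0x1 -> core 0 only; 0xf -> cores 0 1 2 3, matching the notices in this log.
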
00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.888 16:28:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.853 16:28:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.853 16:28:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.854 16:28:27 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.854 00:07:19.854 real 0m1.428s 00:07:19.854 user 0m1.269s 00:07:19.854 sys 0m0.163s 00:07:19.854 16:28:27 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.854 16:28:27 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:19.854 ************************************ 00:07:19.854 END TEST accel_decmop_full 00:07:19.854 ************************************ 00:07:20.112 16:28:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.112 16:28:27 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:20.112 16:28:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.112 16:28:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 ************************************ 00:07:20.112 START TEST accel_decomp_mcore 00:07:20.112 ************************************ 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:20.112 [2024-05-15 16:28:27.110795] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:20.112 [2024-05-15 16:28:27.110853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650476 ] 00:07:20.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.112 [2024-05-15 16:28:27.181622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.112 [2024-05-15 16:28:27.272956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.112 [2024-05-15 16:28:27.273009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.112 [2024-05-15 16:28:27.273126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.112 [2024-05-15 16:28:27.273129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.112 16:28:27 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.112 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.369 16:28:27 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.369 16:28:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.299 00:07:21.299 real 0m1.406s 00:07:21.299 user 0m4.667s 00:07:21.299 sys 0m0.152s 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.299 16:28:28 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:21.299 ************************************ 00:07:21.299 END TEST accel_decomp_mcore 00:07:21.299 ************************************ 00:07:21.299 16:28:28 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.299 16:28:28 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:21.299 16:28:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.299 16:28:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.556 ************************************ 00:07:21.557 START TEST accel_decomp_full_mcore 00:07:21.557 ************************************ 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.557 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.557 [2024-05-15 16:28:28.568735] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:21.557 [2024-05-15 16:28:28.568797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650635 ] 00:07:21.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.557 [2024-05-15 16:28:28.638313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.557 [2024-05-15 16:28:28.731788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.557 [2024-05-15 16:28:28.731847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.557 [2024-05-15 16:28:28.731961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.557 [2024-05-15 16:28:28.731963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.815 16:28:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.815 16:28:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.188 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.189 00:07:23.189 real 0m1.435s 00:07:23.189 user 0m4.759s 00:07:23.189 sys 0m0.169s 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.189 16:28:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:23.189 ************************************ 00:07:23.189 END TEST accel_decomp_full_mcore 00:07:23.189 ************************************ 00:07:23.189 16:28:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.189 16:28:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:23.189 16:28:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.189 16:28:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.189 ************************************ 00:07:23.189 START TEST accel_decomp_mthread 00:07:23.189 ************************************ 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:23.189 [2024-05-15 16:28:30.055954] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:23.189 [2024-05-15 16:28:30.056023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650797 ] 00:07:23.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.189 [2024-05-15 16:28:30.131067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.189 [2024-05-15 16:28:30.223597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.189 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.190 16:28:30 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.563 00:07:24.563 real 0m1.418s 00:07:24.563 user 0m1.254s 00:07:24.563 sys 0m0.167s 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.563 16:28:31 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 ************************************ 00:07:24.563 END TEST accel_decomp_mthread 00:07:24.563 ************************************ 00:07:24.563 16:28:31 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.563 16:28:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:24.563 16:28:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.563 16:28:31 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 ************************************ 00:07:24.563 START TEST accel_decomp_full_mthread 00:07:24.563 ************************************ 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:24.563 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:24.563 [2024-05-15 16:28:31.525982] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
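
Note: accel_decomp_full_mthread, which starts here, repeats the accel_decomp_mthread run that just finished but adds -o 0; accordingly the traced buffer vals change from '4096 bytes' per op to '111250 bytes' (the whole bib file). Around each run, accel.sh@19-23 drains accel_perf's "var:val" dump through the case loop visible throughout this trace. A minimal hedged sketch of both pieces -- the command lines are as traced at accel/accel.sh@12, but the loop body, case patterns, and fd-62 wiring are reconstructions from the traced line numbers, not the verbatim script:

    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    # The two invocations this log shows ("-c /dev/fd/62" expects the JSON accel config on fd 62):
    #   "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -T 2        # mthread: 4096-byte ops
    #   "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -T 2   # full_mthread: whole-file ops
    while IFS=: read -r var val; do                  # accel.sh@19; val may be multi-word, e.g. "111250 bytes"
        case "$var" in                               # accel.sh@21
            *opc*) accel_opc=$val ;;                 # accel.sh@23 -> "decompress"
            *module*) accel_module=$val ;;           # accel.sh@22 -> "software"
        esac
    done < <("$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -T 2 \
        62< <(build_accel_config))                   # build_accel_config / wiring assumed
    [[ -n $accel_module && -n $accel_opc ]]          # the accel.sh@27 checks seen after each run
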
00:07:24.563 [2024-05-15 16:28:31.526044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651068 ] 00:07:24.563 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.563 [2024-05-15 16:28:31.598304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.563 [2024-05-15 16:28:31.688642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.564 16:28:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.937 00:07:25.937 real 0m1.452s 00:07:25.937 user 0m1.294s 00:07:25.937 sys 0m0.161s 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.937 16:28:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.937 ************************************ 00:07:25.937 END TEST accel_decomp_full_mthread 00:07:25.937 
************************************ 00:07:25.937 16:28:32 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:25.937 16:28:32 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:25.937 16:28:32 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:25.937 16:28:32 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:25.937 16:28:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.937 16:28:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.937 16:28:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.937 16:28:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.937 16:28:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.937 16:28:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.937 16:28:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.937 16:28:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.937 16:28:32 accel -- accel/accel.sh@41 -- # jq -r . 00:07:25.937 ************************************ 00:07:25.937 START TEST accel_dif_functional_tests 00:07:25.937 ************************************ 00:07:25.937 16:28:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:25.937 [2024-05-15 16:28:33.046546] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:25.937 [2024-05-15 16:28:33.046608] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651226 ] 00:07:25.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.937 [2024-05-15 16:28:33.117131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.194 [2024-05-15 16:28:33.210390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.194 [2024-05-15 16:28:33.210454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.194 [2024-05-15 16:28:33.210471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.194 00:07:26.194 00:07:26.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.194 http://cunit.sourceforge.net/ 00:07:26.194 00:07:26.194 00:07:26.194 Suite: accel_dif 00:07:26.194 Test: verify: DIF generated, GUARD check ...passed 00:07:26.194 Test: verify: DIF generated, APPTAG check ...passed 00:07:26.194 Test: verify: DIF generated, REFTAG check ...passed 00:07:26.194 Test: verify: DIF not generated, GUARD check ...[2024-05-15 16:28:33.303861] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.194 [2024-05-15 16:28:33.303938] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.194 passed 00:07:26.194 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 16:28:33.303975] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.194 [2024-05-15 16:28:33.304001] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.194 passed 00:07:26.194 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 16:28:33.304046] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.194 [2024-05-15 
16:28:33.304073] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.194 passed 00:07:26.194 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:26.194 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 16:28:33.304135] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:26.194 passed 00:07:26.194 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:26.194 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:26.194 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:26.194 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 16:28:33.304298] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:26.194 passed 00:07:26.194 Test: generate copy: DIF generated, GUARD check ...passed 00:07:26.194 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:26.194 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:26.194 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:26.194 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:26.194 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:26.194 Test: generate copy: iovecs-len validate ...[2024-05-15 16:28:33.304549] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:26.194 passed 00:07:26.194 Test: generate copy: buffer alignment validate ...passed 00:07:26.194 00:07:26.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.194 suites 1 1 n/a 0 0 00:07:26.195 tests 20 20 20 0 0 00:07:26.195 asserts 204 204 204 0 n/a 00:07:26.195 00:07:26.195 Elapsed time = 0.003 seconds 00:07:26.453 00:07:26.453 real 0m0.507s 00:07:26.453 user 0m0.790s 00:07:26.453 sys 0m0.184s 00:07:26.453 16:28:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.453 16:28:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:26.453 ************************************ 00:07:26.453 END TEST accel_dif_functional_tests 00:07:26.453 ************************************ 00:07:26.453 00:07:26.453 real 0m32.041s 00:07:26.453 user 0m35.103s 00:07:26.453 sys 0m4.866s 00:07:26.453 16:28:33 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.453 16:28:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.453 ************************************ 00:07:26.453 END TEST accel 00:07:26.453 ************************************ 00:07:26.453 16:28:33 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:26.453 16:28:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.453 16:28:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.453 16:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:26.453 ************************************ 00:07:26.453 START TEST accel_rpc 00:07:26.453 ************************************ 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:26.453 * Looking for test storage... 
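
Note: the accel_rpc suite starting here reduces to the RPC sequence traced at accel_rpc.sh@13-42 below; a condensed sketch, under the assumption that rpc_cmd simply wraps scripts/rpc.py against the default socket:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$tgt" --wait-for-rpc &                          # accel_rpc.sh@13, then waitforlisten on its pid
    "$rpc" accel_assign_opc -o copy -m incorrect     # @38: accepted while init is deferred
    "$rpc" accel_assign_opc -o copy -m software      # @40: last assignment wins
    "$rpc" framework_start_init                      # @41: modules load now
    "$rpc" accel_get_opc_assignments | jq -r .copy   # @42: must print "software"
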
00:07:26.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:26.453 16:28:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:26.453 16:28:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1651412 00:07:26.453 16:28:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:26.453 16:28:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1651412 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1651412 ']' 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.453 16:28:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.711 [2024-05-15 16:28:33.688986] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:26.711 [2024-05-15 16:28:33.689068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651412 ] 00:07:26.711 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.711 [2024-05-15 16:28:33.755636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.711 [2024-05-15 16:28:33.836786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.711 16:28:33 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:26.711 16:28:33 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:26.711 16:28:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:26.711 16:28:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:26.711 16:28:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:26.711 16:28:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:26.711 16:28:33 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:26.711 16:28:33 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.711 16:28:33 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.711 16:28:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.711 ************************************ 00:07:26.711 START TEST accel_assign_opcode 00:07:26.711 ************************************ 00:07:26.711 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.712 [2024-05-15 16:28:33.917473] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.712 [2024-05-15 16:28:33.925478] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.712 16:28:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 16:28:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.227 software 00:07:27.227 00:07:27.227 real 0m0.296s 00:07:27.227 user 0m0.038s 00:07:27.227 sys 0m0.008s 00:07:27.227 16:28:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.227 16:28:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.227 ************************************ 00:07:27.227 END TEST accel_assign_opcode 00:07:27.227 ************************************ 00:07:27.227 16:28:34 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1651412 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1651412 ']' 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1651412 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1651412 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1651412' 00:07:27.227 killing process with pid 1651412 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@965 -- # kill 1651412 00:07:27.227 16:28:34 accel_rpc -- common/autotest_common.sh@970 -- # wait 1651412 00:07:27.484 00:07:27.484 real 0m1.077s 00:07:27.484 user 0m0.990s 00:07:27.484 sys 0m0.436s 00:07:27.484 16:28:34 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.484 16:28:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.484 ************************************ 00:07:27.484 END TEST accel_rpc 00:07:27.484 ************************************ 00:07:27.484 16:28:34 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.484 16:28:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.484 16:28:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.484 16:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:27.741 ************************************ 00:07:27.741 START TEST app_cmdline 00:07:27.741 ************************************ 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.741 * Looking for test storage... 00:07:27.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.741 16:28:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.741 16:28:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1651617 00:07:27.741 16:28:34 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.741 16:28:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1651617 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1651617 ']' 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.741 16:28:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.741 [2024-05-15 16:28:34.818902] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
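
Note: cmdline.sh exercises the RPC allowlist. spdk_tgt is launched with --rpcs-allowed (cmdline.sh@16 above), so only the two listed methods may be called and anything else must fail with JSON-RPC error -32601, which is exactly the "Method not found" exchange logged below. Condensed, with paths as they appear in this log:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$rpc" spdk_get_version          # allowed: returns the version JSON shown below
    "$rpc" rpc_get_methods           # allowed: lists exactly these two methods
    "$rpc" env_dpdk_get_mem_stats    # blocked: code -32601, "Method not found"
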
00:07:27.741 [2024-05-15 16:28:34.818997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651617 ] 00:07:27.741 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.741 [2024-05-15 16:28:34.885447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.742 [2024-05-15 16:28:34.965660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.000 16:28:35 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.000 16:28:35 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:28.000 16:28:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.258 { 00:07:28.258 "version": "SPDK v24.05-pre git sha1 253cca4fc", 00:07:28.258 "fields": { 00:07:28.258 "major": 24, 00:07:28.258 "minor": 5, 00:07:28.258 "patch": 0, 00:07:28.258 "suffix": "-pre", 00:07:28.258 "commit": "253cca4fc" 00:07:28.258 } 00:07:28.258 } 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.258 16:28:35 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.258 16:28:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.258 16:28:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.258 16:28:35 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.515 16:28:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.515 16:28:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.515 16:28:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.515 16:28:35 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.515 16:28:35 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.515 request: 00:07:28.515 { 00:07:28.515 "method": "env_dpdk_get_mem_stats", 00:07:28.515 "req_id": 1 00:07:28.515 } 00:07:28.515 Got JSON-RPC error response 00:07:28.515 response: 00:07:28.515 { 00:07:28.515 "code": -32601, 00:07:28.515 "message": "Method not found" 00:07:28.515 } 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.773 16:28:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1651617 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1651617 ']' 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1651617 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1651617 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1651617' 00:07:28.773 killing process with pid 1651617 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@965 -- # kill 1651617 00:07:28.773 16:28:35 app_cmdline -- common/autotest_common.sh@970 -- # wait 1651617 00:07:29.032 00:07:29.032 real 0m1.462s 00:07:29.032 user 0m1.784s 00:07:29.032 sys 0m0.467s 00:07:29.032 16:28:36 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.032 16:28:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.032 ************************************ 00:07:29.032 END TEST app_cmdline 00:07:29.032 ************************************ 00:07:29.032 16:28:36 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.032 16:28:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.032 16:28:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.032 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.032 ************************************ 00:07:29.032 START TEST version 00:07:29.032 ************************************ 00:07:29.032 16:28:36 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.290 * Looking for test storage... 
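
Note: version.sh derives the expected version string by scraping include/spdk/version.h, as the get_header_version calls traced just below show. A condensed sketch -- the grep/cut/tr pipeline is verbatim from the trace, the rc0 mapping is hedged from version.sh@28:

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # -> 24
    minor=$(get_header_version MINOR)     # -> 5
    patch=$(get_header_version PATCH)     # -> 0, so no .patch component is appended
    suffix=$(get_header_version SUFFIX)   # -> -pre, reported as rc0
    # expected 24.5rc0, compared against: python3 -c 'import spdk; print(spdk.__version__)'
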
00:07:29.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.291 16:28:36 version -- app/version.sh@17 -- # get_header_version major 00:07:29.291 16:28:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # cut -f2 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.291 16:28:36 version -- app/version.sh@17 -- # major=24 00:07:29.291 16:28:36 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.291 16:28:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # cut -f2 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.291 16:28:36 version -- app/version.sh@18 -- # minor=5 00:07:29.291 16:28:36 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.291 16:28:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # cut -f2 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.291 16:28:36 version -- app/version.sh@19 -- # patch=0 00:07:29.291 16:28:36 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.291 16:28:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # cut -f2 00:07:29.291 16:28:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.291 16:28:36 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.291 16:28:36 version -- app/version.sh@22 -- # version=24.5 00:07:29.291 16:28:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.291 16:28:36 version -- app/version.sh@28 -- # version=24.5rc0 00:07:29.291 16:28:36 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.291 16:28:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.291 16:28:36 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:29.291 16:28:36 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:29.291 00:07:29.291 real 0m0.104s 00:07:29.291 user 0m0.059s 00:07:29.291 sys 0m0.066s 00:07:29.291 16:28:36 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.291 16:28:36 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.291 ************************************ 00:07:29.291 END TEST version 00:07:29.291 ************************************ 00:07:29.291 16:28:36 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@194 -- # uname -s 00:07:29.291 16:28:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:29.291 16:28:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.291 16:28:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.291 16:28:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
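
Note: the spdk/autotest.sh checks around this point gate which suites run; the transport fork traced just below (autotest.sh@279-283) amounts to the following, where the variable name SPDK_TEST_NVMF_TRANSPORT is an assumption -- the xtrace only shows its expanded value "tcp" -- and $rootdir stands for the spdk checkout path used throughout this log:

    if [ "$SPDK_TEST_NVMF_TRANSPORT" = rdma ]; then      # '[' tcp = rdma ']' -> false here
        run_test nvmf_rdma "$rootdir/test/nvmf/nvmf.sh" --transport=rdma   # hypothetical branch
    elif [ "$SPDK_TEST_NVMF_TRANSPORT" = tcp ]; then     # '[' tcp = tcp ']' -> true here
        run_test nvmf_tcp "$rootdir/test/nvmf/nvmf.sh" --transport=tcp     # autotest.sh@283
    fi
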
00:07:29.291 16:28:36 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:29.291 16:28:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.291 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.291 16:28:36 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:07:29.291 16:28:36 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:07:29.291 16:28:36 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.291 16:28:36 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:29.291 16:28:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.291 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.291 ************************************ 00:07:29.291 START TEST nvmf_tcp 00:07:29.291 ************************************ 00:07:29.291 16:28:36 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.291 * Looking for test storage... 00:07:29.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.291 16:28:36 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.291 16:28:36 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.291 16:28:36 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.291 16:28:36 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.291 16:28:36 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.291 16:28:36 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.291 16:28:36 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:29.291 16:28:36 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.291 16:28:36 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:29.292 16:28:36 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.292 16:28:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:29.292 16:28:36 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.292 16:28:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:29.292 16:28:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.292 
16:28:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.292 ************************************ 00:07:29.292 START TEST nvmf_example 00:07:29.292 ************************************ 00:07:29.292 16:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.550 * Looking for test storage... 00:07:29.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.550 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.551 16:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:32.077 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:32.077 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:32.077 Found net devices under 
0000:09:00.0: cvl_0_0 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:32.077 Found net devices under 0000:09:00.1: cvl_0_1 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.077 16:28:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.077 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:07:32.078 00:07:32.078 --- 10.0.0.2 ping statistics --- 00:07:32.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.078 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:07:32.078 00:07:32.078 --- 10.0.0.1 ping statistics --- 00:07:32.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.078 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1653822 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1653822 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1653822 ']' 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
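For reference, the nvmf_tcp_init sequence traced above reduces to the commands below. This is a condensed sketch of the wiring for this particular run (addr-flush and cleanup steps omitted): one port of the E810 NIC, cvl_0_0, is moved into a private network namespace to act as the NVMe/TCP target side, its sibling port cvl_0_1 stays in the root namespace as the initiator side, and TCP port 4420 is opened between them.

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up                              # bring both ends and loopback up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # sanity check each direction
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The example target is then launched inside that namespace, which is why the app invocation above is prefixed with "ip netns exec cvl_0_0_ns_spdk".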
00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.078 16:28:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.011 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:33.268 16:28:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:33.268 EAL: No free 2048 kB hugepages reported on node 1 
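The rpc_cmd calls above are ordinary SPDK JSON-RPCs; outside the harness the same target could be assembled by hand with scripts/rpc.py. A minimal sketch, assuming a target app already listening on the default /var/tmp/spdk.sock and reusing the exact arguments from this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, same options as NVMF_TRANSPORT_OPTS
    scripts/rpc.py bdev_malloc_create 64 512                  # 64 MB ram disk "Malloc0" with 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation just above decodes as: -q 64 (queue depth 64), -o 4096 (4 KiB I/Os), -w randrw -M 30 (random read/write mix, 30% reads), -t 10 (run for 10 seconds), with -r pointing the initiator at the listener just created; its results follow.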
00:07:43.466 Initializing NVMe Controllers 00:07:43.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:43.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:43.467 Initialization complete. Launching workers. 00:07:43.467 ======================================================== 00:07:43.467 Latency(us) 00:07:43.467 Device Information : IOPS MiB/s Average min max 00:07:43.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14855.23 58.03 4308.65 877.80 15256.74 00:07:43.467 ======================================================== 00:07:43.467 Total : 14855.23 58.03 4308.65 877.80 15256.74 00:07:43.467 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.467 rmmod nvme_tcp 00:07:43.467 rmmod nvme_fabrics 00:07:43.467 rmmod nvme_keyring 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1653822 ']' 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1653822 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1653822 ']' 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1653822 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1653822 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1653822' 00:07:43.467 killing process with pid 1653822 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1653822 00:07:43.467 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1653822 00:07:43.726 nvmf threads initialize successfully 00:07:43.726 bdev subsystem init successfully 00:07:43.726 created a nvmf target service 00:07:43.726 create targets's poll groups done 00:07:43.726 all subsystems of target started 00:07:43.726 nvmf target is running 00:07:43.726 all subsystems of target stopped 00:07:43.726 destroy targets's poll groups done 00:07:43.726 destroyed the nvmf target service 00:07:43.726 bdev subsystem finish successfully 00:07:43.726 nvmf threads destroy successfully 00:07:43.726 16:28:50 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.726 16:28:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.264 16:28:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.264 16:28:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:46.264 16:28:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.264 16:28:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.264 00:07:46.264 real 0m16.391s 00:07:46.264 user 0m45.597s 00:07:46.264 sys 0m3.577s 00:07:46.264 16:28:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.264 16:28:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.264 ************************************ 00:07:46.264 END TEST nvmf_example 00:07:46.264 ************************************ 00:07:46.264 16:28:52 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:46.264 16:28:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:46.264 16:28:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.264 16:28:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.264 ************************************ 00:07:46.264 START TEST nvmf_filesystem 00:07:46.264 ************************************ 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:46.264 * Looking for test storage... 
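Where nvmf_example drove the subsystem purely from userspace with spdk_nvme_perf, the filesystem suite starting here exercises the kernel initiator end to end. In outline it follows the cycle below; this is a sketch of the shape of the test rather than its literal steps: the device node /dev/nvme0n1 is illustrative (the suite discovers it at run time), the subsystem NQN is the one the test creates, and the cycle repeats for several filesystem types. The --hostnqn value is the NVME_HOSTNQN generated earlier in this log.

    # connect the kernel initiator to the subsystem exported over 10.0.0.2:4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    mkfs.xfs /dev/nvme0n1                     # ext4 and btrfs passes work the same way
    mkdir -p /mnt/device && mount /dev/nvme0n1 /mnt/device
    touch /mnt/device/aaa && sync             # prove the filesystem is writable
    umount /mnt/device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1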
00:07:46.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:46.264 16:28:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:46.264 16:28:53 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:46.264 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.265 16:28:53 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:46.265 #define SPDK_CONFIG_H 00:07:46.265 #define SPDK_CONFIG_APPS 1 00:07:46.265 #define SPDK_CONFIG_ARCH native 00:07:46.265 #undef SPDK_CONFIG_ASAN 00:07:46.265 #undef SPDK_CONFIG_AVAHI 00:07:46.265 #undef SPDK_CONFIG_CET 00:07:46.265 #define SPDK_CONFIG_COVERAGE 1 00:07:46.265 #define SPDK_CONFIG_CROSS_PREFIX 00:07:46.265 #undef SPDK_CONFIG_CRYPTO 00:07:46.265 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:46.265 #undef SPDK_CONFIG_CUSTOMOCF 00:07:46.265 #undef SPDK_CONFIG_DAOS 00:07:46.265 #define SPDK_CONFIG_DAOS_DIR 00:07:46.265 #define SPDK_CONFIG_DEBUG 1 00:07:46.265 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:46.265 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:46.265 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:46.265 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.265 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:46.265 #undef SPDK_CONFIG_DPDK_UADK 00:07:46.265 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:46.265 #define SPDK_CONFIG_EXAMPLES 1 00:07:46.265 #undef SPDK_CONFIG_FC 00:07:46.265 #define SPDK_CONFIG_FC_PATH 00:07:46.265 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:46.265 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:46.265 #undef SPDK_CONFIG_FUSE 00:07:46.265 #undef SPDK_CONFIG_FUZZER 00:07:46.265 #define SPDK_CONFIG_FUZZER_LIB 00:07:46.265 #undef SPDK_CONFIG_GOLANG 00:07:46.265 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:46.265 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:46.265 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:46.265 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:46.265 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:46.265 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:46.265 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:46.265 #define SPDK_CONFIG_IDXD 1 00:07:46.265 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:46.265 #undef SPDK_CONFIG_IPSEC_MB 00:07:46.265 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:46.265 #define SPDK_CONFIG_ISAL 1 00:07:46.265 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:46.265 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:46.265 #define SPDK_CONFIG_LIBDIR 00:07:46.265 #undef SPDK_CONFIG_LTO 00:07:46.265 #define SPDK_CONFIG_MAX_LCORES 
00:07:46.265 #define SPDK_CONFIG_NVME_CUSE 1 00:07:46.265 #undef SPDK_CONFIG_OCF 00:07:46.265 #define SPDK_CONFIG_OCF_PATH 00:07:46.265 #define SPDK_CONFIG_OPENSSL_PATH 00:07:46.265 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:46.265 #define SPDK_CONFIG_PGO_DIR 00:07:46.265 #undef SPDK_CONFIG_PGO_USE 00:07:46.265 #define SPDK_CONFIG_PREFIX /usr/local 00:07:46.265 #undef SPDK_CONFIG_RAID5F 00:07:46.265 #undef SPDK_CONFIG_RBD 00:07:46.265 #define SPDK_CONFIG_RDMA 1 00:07:46.265 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:46.265 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:46.265 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:46.265 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:46.265 #define SPDK_CONFIG_SHARED 1 00:07:46.265 #undef SPDK_CONFIG_SMA 00:07:46.265 #define SPDK_CONFIG_TESTS 1 00:07:46.265 #undef SPDK_CONFIG_TSAN 00:07:46.265 #define SPDK_CONFIG_UBLK 1 00:07:46.265 #define SPDK_CONFIG_UBSAN 1 00:07:46.265 #undef SPDK_CONFIG_UNIT_TESTS 00:07:46.265 #undef SPDK_CONFIG_URING 00:07:46.265 #define SPDK_CONFIG_URING_PATH 00:07:46.265 #undef SPDK_CONFIG_URING_ZNS 00:07:46.265 #undef SPDK_CONFIG_USDT 00:07:46.265 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:46.265 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:46.265 #define SPDK_CONFIG_VFIO_USER 1 00:07:46.265 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:46.265 #define SPDK_CONFIG_VHOST 1 00:07:46.265 #define SPDK_CONFIG_VIRTIO 1 00:07:46.265 #undef SPDK_CONFIG_VTUNE 00:07:46.265 #define SPDK_CONFIG_VTUNE_DIR 00:07:46.265 #define SPDK_CONFIG_WERROR 1 00:07:46.265 #define SPDK_CONFIG_WPDK_DIR 00:07:46.265 #undef SPDK_CONFIG_XNVME 00:07:46.265 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.265 16:28:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:46.266 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:46.267 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1655646 ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1655646 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.CD2XXA 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.CD2XXA/tests/target /tmp/spdk.CD2XXA 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=964968448 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4319461376 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=47383490560 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=14611238912 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30992654336 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389961728 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8986624 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.268 16:28:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996529152 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=835584 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:46.268 * Looking for test storage... 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=47383490560 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=16825831424 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:46.268 
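The trace above is autotest_common.sh's set_test_storage step: it parses df -T, maps the test directory to its mount point, and keeps it only if the ~2 GiB requested_size fits, otherwise falling back to a mktemp scratch directory under /tmp. A minimal standalone sketch of that selection logic, assuming POSIX df output (function and variable names here are ours, not SPDK's):

    #!/usr/bin/env bash
    # pick_test_storage SIZE DIR: print DIR if its filesystem has at least
    # SIZE bytes available, otherwise print a fresh scratch dir under /tmp.
    pick_test_storage() {
        local requested=$1 dir=$2 avail_kb
        # df -P line 2: Filesystem 1024-blocks Used Available Capacity Mounted-on
        avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
        if (( avail_kb * 1024 >= requested )); then
            echo "$dir"
        else
            mktemp -d /tmp/spdk_test.XXXXXX
        fi
    }

    pick_test_storage $((2 * 1024 * 1024 * 1024)) "$PWD"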
16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.268 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.269 16:28:53 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
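The heavily repeated /opt/protoc, /opt/go and /opt/golangci segments in the PATH above (and the repeated spdk/dpdk triples in LD_LIBRARY_PATH earlier) appear to come from paths/export.sh being re-sourced by each nested test script, with every source prepending the same directories again. An idempotent prepend, sketched here as a suggestion rather than SPDK's actual code, would keep the variable from growing:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                  # already on PATH, do nothing
            *)        PATH="$1:$PATH" ;;
        esac
    }

    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH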
00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.269 16:28:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:48.798 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:48.798 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.798 16:28:55 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:48.798 Found net devices under 0000:09:00.0: cvl_0_0 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.798 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:48.799 Found net devices under 0000:09:00.1: cvl_0_1 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:48.799 00:07:48.799 --- 10.0.0.2 ping statistics --- 00:07:48.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.799 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:48.799 00:07:48.799 --- 10.0.0.1 ping statistics --- 00:07:48.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.799 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.799 ************************************ 00:07:48.799 START TEST nvmf_filesystem_no_in_capsule 00:07:48.799 ************************************ 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.799 16:28:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1657571 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1657571 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1657571 ']' 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.799 16:28:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.799 [2024-05-15 16:28:55.792677] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:07:48.799 [2024-05-15 16:28:55.792754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.799 [2024-05-15 16:28:55.871480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.799 [2024-05-15 16:28:55.961520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.799 [2024-05-15 16:28:55.961583] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.799 [2024-05-15 16:28:55.961599] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.799 [2024-05-15 16:28:55.961613] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.799 [2024-05-15 16:28:55.961624] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
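The nvmfappstart/waitforlisten pair traced above launches nvmf_tgt inside the target network namespace and then blocks until the app answers on its JSON-RPC socket. A condensed sketch of that pattern, with paths abbreviated relative to the spdk checkout and the polling loop ours (it assumes SPDK's scripts/rpc.py, the rpc_get_methods RPC, and the default /var/tmp/spdk.sock address):

    # Start the target in its namespace and wait for the RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        # Bail out if the target died before it could start listening.
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done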
00:07:48.799 [2024-05-15 16:28:55.961705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.799 [2024-05-15 16:28:55.961771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.799 [2024-05-15 16:28:55.961858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.799 [2024-05-15 16:28:55.961860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.057 [2024-05-15 16:28:56.114817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.057 Malloc1 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.057 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.314 [2024-05-15 16:28:56.301038] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:49.314 [2024-05-15 16:28:56.301377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:49.314 { 00:07:49.314 "name": "Malloc1", 00:07:49.314 "aliases": [ 00:07:49.314 "c2c31dae-2fe9-41a0-84d9-60d6320a70ad" 00:07:49.314 ], 00:07:49.314 "product_name": "Malloc disk", 00:07:49.314 "block_size": 512, 00:07:49.314 "num_blocks": 1048576, 00:07:49.314 "uuid": "c2c31dae-2fe9-41a0-84d9-60d6320a70ad", 00:07:49.314 "assigned_rate_limits": { 00:07:49.314 "rw_ios_per_sec": 0, 00:07:49.314 "rw_mbytes_per_sec": 0, 00:07:49.314 "r_mbytes_per_sec": 0, 00:07:49.314 "w_mbytes_per_sec": 0 00:07:49.314 }, 00:07:49.314 "claimed": true, 00:07:49.314 "claim_type": "exclusive_write", 00:07:49.314 "zoned": false, 00:07:49.314 "supported_io_types": { 00:07:49.314 "read": true, 00:07:49.314 "write": true, 00:07:49.314 "unmap": true, 00:07:49.314 "write_zeroes": true, 00:07:49.314 "flush": true, 00:07:49.314 "reset": true, 00:07:49.314 "compare": false, 00:07:49.314 "compare_and_write": false, 00:07:49.314 "abort": true, 00:07:49.314 "nvme_admin": false, 00:07:49.314 "nvme_io": false 00:07:49.314 }, 00:07:49.314 "memory_domains": [ 00:07:49.314 { 00:07:49.314 "dma_device_id": "system", 00:07:49.314 "dma_device_type": 1 
00:07:49.314 }, 00:07:49.314 { 00:07:49.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.314 "dma_device_type": 2 00:07:49.314 } 00:07:49.314 ], 00:07:49.314 "driver_specific": {} 00:07:49.314 } 00:07:49.314 ]' 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:49.314 16:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.878 16:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.878 16:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:49.878 16:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.878 16:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:49.878 16:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.403 16:28:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.403 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.404 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.404 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:52.404 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.404 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.404 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.404 16:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:52.969 16:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.903 ************************************ 00:07:53.903 START TEST filesystem_ext4 00:07:53.903 ************************************ 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:53.903 16:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:53.903 16:29:01 
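The attach-and-partition sequence just traced condenses to the following host-side commands (the address, NQNs and the SPDKISFASTANDAWESOME serial are taken verbatim from the log; the grep pulls the kernel device name out of lsblk's SERIAL column):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # -> nvme0n1
    mkdir -p /mnt/device
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%   # one GPT partition spanning the disk
    partprobe                                                    # have the kernel re-read the table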
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:53.903 mke2fs 1.46.5 (30-Dec-2021) 00:07:54.161 Discarding device blocks: 0/522240 done 00:07:54.161 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:54.161 Filesystem UUID: db2dd14f-52ae-4830-87b3-d6ad1a6f53f3 00:07:54.161 Superblock backups stored on blocks: 00:07:54.161 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:54.161 00:07:54.161 Allocating group tables: 0/64 done 00:07:54.161 Writing inode tables: 0/64 done 00:07:54.161 Creating journal (8192 blocks): done 00:07:55.243 Writing superblocks and filesystem accounting information: 0/64 done 00:07:55.243 00:07:55.243 16:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:55.243 16:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.822 16:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1657571 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.822 00:07:55.822 real 0m1.967s 00:07:55.822 user 0m0.017s 00:07:55.822 sys 0m0.032s 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.822 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:55.822 ************************************ 00:07:55.822 END TEST filesystem_ext4 00:07:55.822 ************************************ 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.088
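Each filesystem pass (ext4 above, btrfs and xfs below) runs the same smoke test against the mounted partition; stripped of the xtrace noise it is roughly:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa     # write something through the NVMe/TCP path
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"        # the target process must still be alive after the I/O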
16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.088 ************************************ 00:07:56.088 START TEST filesystem_btrfs 00:07:56.088 ************************************ 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:56.088 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:56.346 btrfs-progs v6.6.2 00:07:56.346 See https://btrfs.readthedocs.io for more information. 00:07:56.346 00:07:56.346 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:56.346 NOTE: several default settings have changed in version 5.15, please make sure 00:07:56.346 this does not affect your deployments: 00:07:56.346 - DUP for metadata (-m dup) 00:07:56.346 - enabled no-holes (-O no-holes) 00:07:56.346 - enabled free-space-tree (-R free-space-tree) 00:07:56.346 00:07:56.346 Label: (null) 00:07:56.346 UUID: d9ca5de6-8f3e-42a0-83e2-cdf47112e6a8 00:07:56.346 Node size: 16384 00:07:56.346 Sector size: 4096 00:07:56.346 Filesystem size: 510.00MiB 00:07:56.346 Block group profiles: 00:07:56.346 Data: single 8.00MiB 00:07:56.346 Metadata: DUP 32.00MiB 00:07:56.346 System: DUP 8.00MiB 00:07:56.346 SSD detected: yes 00:07:56.346 Zoned device: no 00:07:56.346 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:56.346 Runtime features: free-space-tree 00:07:56.346 Checksum: crc32c 00:07:56.346 Number of devices: 1 00:07:56.346 Devices: 00:07:56.346 ID SIZE PATH 00:07:56.346 1 510.00MiB /dev/nvme0n1p1 00:07:56.346 00:07:56.346 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:56.346 16:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1657571 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.281 00:07:57.281 real 0m1.153s 00:07:57.281 user 0m0.019s 00:07:57.281 sys 0m0.034s 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:57.281 ************************************ 00:07:57.281 END TEST filesystem_btrfs 00:07:57.281 ************************************ 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:57.281 16:29:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.281 ************************************ 00:07:57.281 START TEST filesystem_xfs 00:07:57.281 ************************************ 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:57.281 16:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:57.281 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:57.281 = sectsz=512 attr=2, projid32bit=1 00:07:57.281 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:57.281 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:57.281 data = bsize=4096 blocks=130560, imaxpct=25 00:07:57.281 = sunit=0 swidth=0 blks 00:07:57.281 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:57.281 log =internal log bsize=4096 blocks=16384, version=2 00:07:57.281 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:57.281 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.216 Discarding blocks...Done. 
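All three passes funnel through the make_filesystem helper in autotest_common.sh; the only per-filesystem twist visible in the traces is the spelling of the force flag (mkfs.ext4 takes -F, mkfs.btrfs and mkfs.xfs take -f). A simplified sketch of that selection (the real helper also keeps a retry counter, elided here):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"   # e.g. mkfs.xfs -f /dev/nvme0n1p1, as traced above
    }
    make_filesystem xfs /dev/nvme0n1p1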
00:07:58.216 16:29:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.216 16:29:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1657571 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.791 00:08:00.791 real 0m3.611s 00:08:00.791 user 0m0.013s 00:08:00.791 sys 0m0.036s 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:00.791 ************************************ 00:08:00.791 END TEST filesystem_xfs 00:08:00.791 ************************************ 00:08:00.791 16:29:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:01.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:01.049 
16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1657571 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1657571 ']' 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1657571 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:01.049 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1657571 00:08:01.307 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:01.307 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:01.307 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1657571' 00:08:01.307 killing process with pid 1657571 00:08:01.307 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1657571 00:08:01.307 [2024-05-15 16:29:08.290208] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:01.307 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1657571 00:08:01.565 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:01.565 00:08:01.566 real 0m13.005s 00:08:01.566 user 0m49.873s 00:08:01.566 sys 0m1.745s 00:08:01.566 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.566 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.566 ************************************ 00:08:01.566 END TEST nvmf_filesystem_no_in_capsule 00:08:01.566 ************************************ 00:08:01.566 16:29:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:01.566 16:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:01.566 16:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.566 16:29:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.824 ************************************ 00:08:01.824 START TEST nvmf_filesystem_in_capsule 00:08:01.824 ************************************ 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1659316 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1659316 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1659316 ']' 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.824 16:29:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.824 [2024-05-15 16:29:08.856329] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:08:01.824 [2024-05-15 16:29:08.856414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.824 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.824 [2024-05-15 16:29:08.936910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.824 [2024-05-15 16:29:09.024097] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.824 [2024-05-15 16:29:09.024147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:01.824 [2024-05-15 16:29:09.024171] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.824 [2024-05-15 16:29:09.024183] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.824 [2024-05-15 16:29:09.024192] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.824 [2024-05-15 16:29:09.024320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.824 [2024-05-15 16:29:09.024377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.824 [2024-05-15 16:29:09.024443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.824 [2024-05-15 16:29:09.024445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.083 [2024-05-15 16:29:09.184006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.083 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 Malloc1 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.341 16:29:09 
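Target-side provisioning for the in-capsule variant is identical to the earlier run except for the transport options: -c 4096 lets hosts carry up to 4 KiB of I/O data inside the command capsule itself. Replayed against rpc.py directly, with the namespace and listener calls that follow in the trace below:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c: in-capsule data size
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420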
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 [2024-05-15 16:29:09.367517] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:02.341 [2024-05-15 16:29:09.367824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:02.341 { 00:08:02.341 "name": "Malloc1", 00:08:02.341 "aliases": [ 00:08:02.341 "a23d46a4-6d2a-48e2-827c-ac719dbfb924" 00:08:02.341 ], 00:08:02.341 "product_name": "Malloc disk", 00:08:02.341 "block_size": 512, 00:08:02.341 "num_blocks": 1048576, 00:08:02.341 "uuid": "a23d46a4-6d2a-48e2-827c-ac719dbfb924", 00:08:02.341 "assigned_rate_limits": { 00:08:02.341 "rw_ios_per_sec": 0, 00:08:02.341 "rw_mbytes_per_sec": 0, 00:08:02.341 "r_mbytes_per_sec": 0, 00:08:02.341 "w_mbytes_per_sec": 0 00:08:02.341 }, 00:08:02.341 "claimed": true, 00:08:02.341 "claim_type": "exclusive_write", 00:08:02.341 "zoned": false, 00:08:02.341 "supported_io_types": { 00:08:02.341 "read": true, 00:08:02.341 "write": true, 00:08:02.341 "unmap": true, 00:08:02.341 "write_zeroes": true, 00:08:02.341 "flush": true, 00:08:02.341 "reset": true, 
00:08:02.341 "compare": false, 00:08:02.341 "compare_and_write": false, 00:08:02.341 "abort": true, 00:08:02.341 "nvme_admin": false, 00:08:02.341 "nvme_io": false 00:08:02.341 }, 00:08:02.341 "memory_domains": [ 00:08:02.341 { 00:08:02.341 "dma_device_id": "system", 00:08:02.341 "dma_device_type": 1 00:08:02.341 }, 00:08:02.341 { 00:08:02.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.341 "dma_device_type": 2 00:08:02.341 } 00:08:02.341 ], 00:08:02.341 "driver_specific": {} 00:08:02.341 } 00:08:02.341 ]' 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:02.341 16:29:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.906 16:29:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.906 16:29:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:02.906 16:29:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.906 16:29:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:02.906 16:29:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:04.804 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:04.804 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:04.804 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:05.062 16:29:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:05.996 16:29:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.929 ************************************ 00:08:06.929 START TEST filesystem_in_capsule_ext4 00:08:06.929 ************************************ 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:06.929 16:29:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:06.929 mke2fs 1.46.5 (30-Dec-2021) 00:08:07.187 Discarding device blocks: 0/522240 done 00:08:07.187 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:07.187 Filesystem UUID: 220e49fa-c3b9-4556-a11b-9f0d1f8ca413 00:08:07.187 Superblock backups stored on blocks: 00:08:07.187 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:07.187 00:08:07.187 Allocating group tables: 0/64 done 00:08:07.187 Writing inode tables: 0/64 done 00:08:08.561 Creating journal (8192 blocks): done 00:08:08.561 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.561 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1659316 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.561 00:08:08.561 real 0m1.523s 00:08:08.561 user 0m0.022s 00:08:08.561 sys 0m0.030s 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.561 ************************************ 00:08:08.561 END TEST filesystem_in_capsule_ext4 00:08:08.561 ************************************ 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.561 ************************************ 00:08:08.561 START TEST filesystem_in_capsule_btrfs 00:08:08.561 ************************************ 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:08.561 16:29:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.128 btrfs-progs v6.6.2 00:08:09.128 See https://btrfs.readthedocs.io for more information. 00:08:09.128 00:08:09.128 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:09.128 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.128 this does not affect your deployments: 00:08:09.128 - DUP for metadata (-m dup) 00:08:09.128 - enabled no-holes (-O no-holes) 00:08:09.128 - enabled free-space-tree (-R free-space-tree) 00:08:09.128 00:08:09.128 Label: (null) 00:08:09.128 UUID: 762a03b6-0910-47ab-9453-9d2c5c78401f 00:08:09.128 Node size: 16384 00:08:09.128 Sector size: 4096 00:08:09.128 Filesystem size: 510.00MiB 00:08:09.128 Block group profiles: 00:08:09.128 Data: single 8.00MiB 00:08:09.128 Metadata: DUP 32.00MiB 00:08:09.128 System: DUP 8.00MiB 00:08:09.128 SSD detected: yes 00:08:09.128 Zoned device: no 00:08:09.128 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.128 Runtime features: free-space-tree 00:08:09.128 Checksum: crc32c 00:08:09.128 Number of devices: 1 00:08:09.128 Devices: 00:08:09.128 ID SIZE PATH 00:08:09.128 1 510.00MiB /dev/nvme0n1p1 00:08:09.128 00:08:09.128 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:09.128 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1659316 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.693 00:08:09.693 real 0m1.126s 00:08:09.693 user 0m0.015s 00:08:09.693 sys 0m0.046s 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 ************************************ 00:08:09.693 END TEST filesystem_in_capsule_btrfs 00:08:09.693 ************************************ 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 ************************************ 00:08:09.693 START TEST filesystem_in_capsule_xfs 00:08:09.693 ************************************ 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:09.693 16:29:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.952 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.952 = sectsz=512 attr=2, projid32bit=1 00:08:09.952 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.952 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.952 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.952 = sunit=0 swidth=0 blks 00:08:09.952 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:09.952 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.952 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.952 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:10.884 Discarding blocks...Done. 
00:08:10.884 16:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:10.884 16:29:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1659316 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.781 00:08:12.781 real 0m2.960s 00:08:12.781 user 0m0.016s 00:08:12.781 sys 0m0.037s 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.781 ************************************ 00:08:12.781 END TEST filesystem_in_capsule_xfs 00:08:12.781 ************************************ 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.781 16:29:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.781 16:29:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.781 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.781 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:12.781 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1659316 00:08:12.781 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1659316 ']' 00:08:12.782 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1659316 00:08:12.782 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:12.782 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:12.782 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1659316 00:08:13.039 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.040 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.040 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1659316' 00:08:13.040 killing process with pid 1659316 00:08:13.040 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1659316 00:08:13.040 [2024-05-15 16:29:20.033055] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:13.040 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1659316 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:13.299 00:08:13.299 real 0m11.669s 00:08:13.299 user 0m44.747s 00:08:13.299 sys 0m1.633s 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.299 ************************************ 00:08:13.299 END TEST nvmf_filesystem_in_capsule 00:08:13.299 ************************************ 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
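The killprocess call traced above for pid 1659316 wraps the kill in sanity checks; schematically (simplified from autotest_common.sh, which also handles the sudo-wrapped case):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                            # bail out if the pid is already gone
        ps --no-headers -o comm= "$pid"           # reactor_0 here, so no sudo re-exec is needed
        kill "$pid"
        wait "$pid"                               # reap it and propagate the exit status
    }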
nvmfcleanup 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.299 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.299 rmmod nvme_tcp 00:08:13.299 rmmod nvme_fabrics 00:08:13.559 rmmod nvme_keyring 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.559 16:29:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.465 16:29:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.465 00:08:15.465 real 0m29.658s 00:08:15.465 user 1m35.654s 00:08:15.465 sys 0m5.346s 00:08:15.465 16:29:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.465 16:29:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.465 ************************************ 00:08:15.465 END TEST nvmf_filesystem 00:08:15.465 ************************************ 00:08:15.465 16:29:22 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.465 16:29:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:15.465 16:29:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.465 16:29:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.465 ************************************ 00:08:15.465 START TEST nvmf_target_discovery 00:08:15.465 ************************************ 00:08:15.465 16:29:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.724 * Looking for test storage... 
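Each test in this run closes with the same nvmftestfini sequence, traced above for nvmf_filesystem (and again later for nvmf_target_discovery): sync, unload the NVMe transport modules, remove the SPDK network namespace, and flush the leftover initiator address. A consolidated sketch of that teardown, with interface and namespace names taken from this run (NVMF_PID is an assumed stand-in for the target PID the harness tracks as nvmfpid, and the netns delete is an assumed equivalent of _remove_spdk_ns, whose body is not shown in the trace):

  # teardown sketch -- mirrors the nvmftestfini trace above
  sync
  modprobe -v -r nvme-tcp       # rmmod output shows nvme_fabrics/nvme_keyring go too
  modprobe -v -r nvme-fabrics
  kill "$NVMF_PID"                                   # NVMF_PID: assumed variable
  while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.1; done
  ip netns delete cvl_0_0_ns_spdk                    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1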
00:08:15.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.724 16:29:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.305 16:29:25 
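The supported-device tables being built here (e810, x722, mlx) are matched against the host PCI bus in the records that follow, and each hit is resolved to its kernel net device through sysfs. A minimal sketch of that lookup, using one device address from this run (the glob mirrors the pci_net_devs assignment at nvmf/common.sh@383 in the trace below):

  # resolve a PCI function to its net device(s) via sysfs
  pci=0000:09:00.0
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
  done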
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:18.305 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:18.305 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:18.305 Found net devices under 0000:09:00.0: cvl_0_0 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:18.305 Found net devices under 0000:09:00.1: cvl_0_1 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:08:18.305 00:08:18.305 --- 10.0.0.2 ping statistics --- 00:08:18.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.305 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:18.305 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:18.306 00:08:18.306 --- 10.0.0.1 ping statistics --- 00:08:18.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.306 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1663157 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1663157 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1663157 ']' 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:18.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:18.306 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.306 [2024-05-15 16:29:25.347763] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:08:18.306 [2024-05-15 16:29:25.347848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.306 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.306 [2024-05-15 16:29:25.425610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.306 [2024-05-15 16:29:25.511993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.306 [2024-05-15 16:29:25.512051] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.306 [2024-05-15 16:29:25.512065] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.306 [2024-05-15 16:29:25.512076] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.306 [2024-05-15 16:29:25.512086] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.306 [2024-05-15 16:29:25.514236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.306 [2024-05-15 16:29:25.514268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.306 [2024-05-15 16:29:25.514289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.306 [2024-05-15 16:29:25.514293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 [2024-05-15 16:29:25.662759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:18.564 16:29:25 
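Before nvmf_tgt was launched above, the harness split the two port net devices across a network namespace so that target and initiator can exchange real TCP traffic on a single host. Consolidating the bring-up commands from the trace into one sketch (interface names, addresses, and port exactly as in this run):

  # target side lives in its own namespace; initiator stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator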
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 Null1 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 [2024-05-15 16:29:25.702808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:18.564 [2024-05-15 16:29:25.703076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 Null2 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 Null3 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 Null4 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.564 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:08:18.823 00:08:18.823 Discovery Log Number of Records 6, Generation counter 6 00:08:18.823 =====Discovery Log Entry 0====== 00:08:18.823 trtype: tcp 00:08:18.823 adrfam: ipv4 00:08:18.823 subtype: current discovery subsystem 00:08:18.823 treq: not required 00:08:18.823 portid: 0 00:08:18.823 trsvcid: 4420 00:08:18.823 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.823 traddr: 10.0.0.2 00:08:18.823 eflags: explicit discovery connections, duplicate discovery information 00:08:18.823 sectype: none 00:08:18.823 =====Discovery Log Entry 1====== 00:08:18.823 trtype: tcp 00:08:18.823 adrfam: ipv4 00:08:18.823 subtype: nvme subsystem 00:08:18.823 treq: not required 00:08:18.823 portid: 0 00:08:18.823 trsvcid: 4420 00:08:18.823 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:18.823 traddr: 10.0.0.2 00:08:18.823 eflags: none 00:08:18.823 sectype: none 00:08:18.823 =====Discovery Log Entry 2====== 00:08:18.823 trtype: tcp 00:08:18.823 adrfam: ipv4 00:08:18.823 subtype: nvme subsystem 00:08:18.823 treq: not required 00:08:18.823 portid: 0 00:08:18.823 trsvcid: 4420 00:08:18.823 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:18.823 traddr: 10.0.0.2 00:08:18.823 eflags: none 00:08:18.823 sectype: none 00:08:18.823 =====Discovery Log Entry 3====== 00:08:18.823 trtype: tcp 00:08:18.823 adrfam: ipv4 00:08:18.823 subtype: nvme subsystem 00:08:18.823 treq: not required 00:08:18.823 portid: 0 00:08:18.823 trsvcid: 4420 00:08:18.823 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:18.823 traddr: 10.0.0.2 
00:08:18.823 eflags: none 00:08:18.823 sectype: none 00:08:18.823 =====Discovery Log Entry 4====== 00:08:18.823 trtype: tcp 00:08:18.823 adrfam: ipv4 00:08:18.823 subtype: nvme subsystem 00:08:18.823 treq: not required 00:08:18.823 portid: 0 00:08:18.823 trsvcid: 4420 00:08:18.823 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:18.823 traddr: 10.0.0.2 00:08:18.823 eflags: none 00:08:18.823 sectype: none 00:08:18.823 =====Discovery Log Entry 5====== 00:08:18.823 trtype: tcp 00:08:18.823 adrfam: ipv4 00:08:18.823 subtype: discovery subsystem referral 00:08:18.823 treq: not required 00:08:18.823 portid: 0 00:08:18.823 trsvcid: 4430 00:08:18.823 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.823 traddr: 10.0.0.2 00:08:18.823 eflags: none 00:08:18.823 sectype: none 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:18.823 Perform nvmf subsystem discovery via RPC 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.823 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 [ 00:08:18.823 { 00:08:18.823 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:18.823 "subtype": "Discovery", 00:08:18.823 "listen_addresses": [ 00:08:18.823 { 00:08:18.823 "trtype": "TCP", 00:08:18.823 "adrfam": "IPv4", 00:08:18.823 "traddr": "10.0.0.2", 00:08:18.823 "trsvcid": "4420" 00:08:18.823 } 00:08:18.823 ], 00:08:18.823 "allow_any_host": true, 00:08:18.823 "hosts": [] 00:08:18.823 }, 00:08:18.823 { 00:08:18.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.823 "subtype": "NVMe", 00:08:18.823 "listen_addresses": [ 00:08:18.823 { 00:08:18.823 "trtype": "TCP", 00:08:18.823 "adrfam": "IPv4", 00:08:18.823 "traddr": "10.0.0.2", 00:08:18.823 "trsvcid": "4420" 00:08:18.823 } 00:08:18.823 ], 00:08:18.823 "allow_any_host": true, 00:08:18.823 "hosts": [], 00:08:18.823 "serial_number": "SPDK00000000000001", 00:08:18.823 "model_number": "SPDK bdev Controller", 00:08:18.823 "max_namespaces": 32, 00:08:18.823 "min_cntlid": 1, 00:08:18.823 "max_cntlid": 65519, 00:08:18.823 "namespaces": [ 00:08:18.823 { 00:08:18.823 "nsid": 1, 00:08:18.823 "bdev_name": "Null1", 00:08:18.823 "name": "Null1", 00:08:18.823 "nguid": "584AB76739E64E73ABF306DDB7D81BFD", 00:08:18.823 "uuid": "584ab767-39e6-4e73-abf3-06ddb7d81bfd" 00:08:18.823 } 00:08:18.823 ] 00:08:18.823 }, 00:08:18.823 { 00:08:18.823 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:18.823 "subtype": "NVMe", 00:08:18.823 "listen_addresses": [ 00:08:18.823 { 00:08:18.823 "trtype": "TCP", 00:08:18.823 "adrfam": "IPv4", 00:08:18.823 "traddr": "10.0.0.2", 00:08:18.823 "trsvcid": "4420" 00:08:18.823 } 00:08:18.823 ], 00:08:18.823 "allow_any_host": true, 00:08:18.823 "hosts": [], 00:08:18.823 "serial_number": "SPDK00000000000002", 00:08:18.823 "model_number": "SPDK bdev Controller", 00:08:18.823 "max_namespaces": 32, 00:08:18.823 "min_cntlid": 1, 00:08:18.823 "max_cntlid": 65519, 00:08:18.823 "namespaces": [ 00:08:18.823 { 00:08:18.823 "nsid": 1, 00:08:18.823 "bdev_name": "Null2", 00:08:18.823 "name": "Null2", 00:08:18.823 "nguid": "19BA15CBCF4E46C3B710CF8FF7BF0442", 00:08:18.823 "uuid": "19ba15cb-cf4e-46c3-b710-cf8ff7bf0442" 00:08:18.823 } 00:08:18.823 ] 00:08:18.823 }, 00:08:18.823 { 00:08:18.823 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:18.823 "subtype": "NVMe", 00:08:18.823 "listen_addresses": [ 
00:08:18.823 { 00:08:18.823 "trtype": "TCP", 00:08:18.823 "adrfam": "IPv4", 00:08:18.823 "traddr": "10.0.0.2", 00:08:18.823 "trsvcid": "4420" 00:08:18.823 } 00:08:18.823 ], 00:08:18.823 "allow_any_host": true, 00:08:18.823 "hosts": [], 00:08:18.823 "serial_number": "SPDK00000000000003", 00:08:18.823 "model_number": "SPDK bdev Controller", 00:08:18.823 "max_namespaces": 32, 00:08:18.823 "min_cntlid": 1, 00:08:18.823 "max_cntlid": 65519, 00:08:18.823 "namespaces": [ 00:08:18.823 { 00:08:18.823 "nsid": 1, 00:08:18.824 "bdev_name": "Null3", 00:08:18.824 "name": "Null3", 00:08:18.824 "nguid": "66924C643E9C4E6F9A4C86F6A2F591BA", 00:08:18.824 "uuid": "66924c64-3e9c-4e6f-9a4c-86f6a2f591ba" 00:08:18.824 } 00:08:18.824 ] 00:08:18.824 }, 00:08:18.824 { 00:08:18.824 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:18.824 "subtype": "NVMe", 00:08:18.824 "listen_addresses": [ 00:08:18.824 { 00:08:18.824 "trtype": "TCP", 00:08:18.824 "adrfam": "IPv4", 00:08:18.824 "traddr": "10.0.0.2", 00:08:18.824 "trsvcid": "4420" 00:08:18.824 } 00:08:18.824 ], 00:08:18.824 "allow_any_host": true, 00:08:18.824 "hosts": [], 00:08:18.824 "serial_number": "SPDK00000000000004", 00:08:18.824 "model_number": "SPDK bdev Controller", 00:08:18.824 "max_namespaces": 32, 00:08:18.824 "min_cntlid": 1, 00:08:18.824 "max_cntlid": 65519, 00:08:18.824 "namespaces": [ 00:08:18.824 { 00:08:18.824 "nsid": 1, 00:08:18.824 "bdev_name": "Null4", 00:08:18.824 "name": "Null4", 00:08:18.824 "nguid": "6602FE3D82784607B176C3F6CC675561", 00:08:18.824 "uuid": "6602fe3d-8278-4607-b176-c3f6cc675561" 00:08:18.824 } 00:08:18.824 ] 00:08:18.824 } 00:08:18.824 ] 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.824 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:19.082 
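The discovery test that just wrapped up reduces to a short RPC script: create the TCP transport, provision four null bdevs behind four subsystems listening on 4420, add a discovery listener and a 4430 referral, then check that an initiator-side nvme discover reports all six log entries. A hedged reconstruction as plain rpc.py calls (assuming scripts/rpc.py against the default /var/tmp/spdk.sock; the harness actually drives these through rpc_cmd inside the target namespace and passes --hostnqn/--hostid to nvme discover):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
  done
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # expect 6 discovery log records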
16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.082 rmmod nvme_tcp 00:08:19.082 rmmod nvme_fabrics 00:08:19.082 rmmod nvme_keyring 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1663157 ']' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1663157 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1663157 ']' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1663157 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1663157 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1663157' 00:08:19.082 killing process with pid 1663157 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1663157 00:08:19.082 [2024-05-15 16:29:26.181554] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:19.082 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1663157 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.340 16:29:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.241 16:29:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.241 00:08:21.241 real 0m5.810s 00:08:21.241 user 
0m4.488s 00:08:21.241 sys 0m2.102s 00:08:21.241 16:29:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.241 16:29:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.241 ************************************ 00:08:21.241 END TEST nvmf_target_discovery 00:08:21.241 ************************************ 00:08:21.500 16:29:28 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:21.500 16:29:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:21.500 16:29:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.500 16:29:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.500 ************************************ 00:08:21.500 START TEST nvmf_referrals 00:08:21.500 ************************************ 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:21.500 * Looking for test storage... 00:08:21.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.500 16:29:28 
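The referrals test starting here appears to exercise the referral RPCs in isolation, pointing the discovery service at extra loopback targets; the 127.0.0.2 through 127.0.0.4 addresses and port 4430 come from the defaults visible where referrals.sh is sourced below. The core add/remove pair, sketched under the same rpc.py assumption as above:

  ./scripts/rpc.py nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430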
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:21.500 16:29:28 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.500 16:29:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:24.029 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:24.029 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.029 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:24.030 Found net devices under 0000:09:00.0: cvl_0_0 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:24.030 Found net devices under 0000:09:00.1: cvl_0_1 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
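Note on the setup above: the harness is splitting one machine into target and initiator. The E810 port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and, just below, addressed as 10.0.0.2, while its sibling port cvl_0_1 stays in the root namespace as 10.0.0.1, so the NVMe/TCP traffic actually crosses the NIC ports rather than the kernel loopback. A minimal sketch of the same two-namespace topology using a veth pair (tgt_ns, veth-ini and veth-tgt are illustrative names, not taken from this run):

    ip netns add tgt_ns                                  # private namespace for the target side
    ip link add veth-ini type veth peer name veth-tgt    # veth pair stands in for the two NIC ports
    ip link set veth-tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth-ini                 # initiator side, root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth-tgt
    ip link set veth-ini up
    ip netns exec tgt_ns ip link set veth-tgt up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2                                   # same reachability check the harness runs next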
00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:08:24.030 00:08:24.030 --- 10.0.0.2 ping statistics --- 00:08:24.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.030 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:08:24.030 00:08:24.030 --- 10.0.0.1 ping statistics --- 00:08:24.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.030 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1665546 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1665546 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1665546 ']' 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:24.030 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 [2024-05-15 16:29:31.251468] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:08:24.030 [2024-05-15 16:29:31.251579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.289 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.289 [2024-05-15 16:29:31.326424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.289 [2024-05-15 16:29:31.413567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.289 [2024-05-15 16:29:31.413627] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.289 [2024-05-15 16:29:31.413641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.289 [2024-05-15 16:29:31.413651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.289 [2024-05-15 16:29:31.413661] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.289 [2024-05-15 16:29:31.413742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.289 [2024-05-15 16:29:31.413807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.289 [2024-05-15 16:29:31.413875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.289 [2024-05-15 16:29:31.413873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 [2024-05-15 16:29:31.568999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 [2024-05-15 16:29:31.580969] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:24.547 [2024-05-15 16:29:31.581311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.547 16:29:31 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.547 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.805 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.063 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
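Note: referrals.sh verifies every referral mutation from both ends. After each nvmf_discovery_add_referral / nvmf_discovery_remove_referral, the target's own view (rpc_cmd nvmf_discovery_get_referrals with the traddrs extracted and sorted via jq) is compared against what an initiator sees (nvme discover against the 10.0.0.2:8009 discovery listener, with the "current discovery subsystem" record filtered out). One add-and-verify round, condensed, with rpc.py invoked from an SPDK checkout instead of the full Jenkins wrapper paths:

    # Target side: referral pointing at a specific subsystem, then the RPC view of it.
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Initiator side: the same traddr must show up in the discovery log.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort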
00:08:25.320 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.591 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.592 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.859 rmmod nvme_tcp 00:08:25.859 rmmod nvme_fabrics 00:08:25.859 rmmod nvme_keyring 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1665546 ']' 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1665546 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1665546 ']' 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1665546 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1665546 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1665546' 00:08:25.859 killing process with pid 1665546 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1665546 00:08:25.859 [2024-05-15 16:29:32.899899] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:25.859 16:29:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1665546 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
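Note: nvmftestfini above is the mirror image of the setup. The initiator-side kernel modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt process is killed via the pid recorded at startup, and the target namespace is torn down. _remove_spdk_ns itself is not echoed (its output is redirected away), so the commands below are the usual equivalent rather than a copy from this log, using the names from this run:

    modprobe -r nvme-tcp nvme-fabrics    # matches the rmmod output above
    kill 1665546                         # nvmfpid captured at nvmfappstart
    ip netns delete cvl_0_0_ns_spdk      # drop the target-side namespace
    ip -4 addr flush cvl_0_1             # the flush that closes out this test just below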
00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.119 16:29:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.024 16:29:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.024 00:08:28.024 real 0m6.643s 00:08:28.024 user 0m8.285s 00:08:28.024 sys 0m2.259s 00:08:28.024 16:29:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.024 16:29:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.024 ************************************ 00:08:28.024 END TEST nvmf_referrals 00:08:28.024 ************************************ 00:08:28.024 16:29:35 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:28.024 16:29:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:28.024 16:29:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.024 16:29:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.024 ************************************ 00:08:28.024 START TEST nvmf_connect_disconnect 00:08:28.024 ************************************ 00:08:28.024 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:28.283 * Looking for test storage... 00:08:28.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.283 16:29:35 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
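Note: as in the referrals test, the ": 0" at nvmf/common.sh@47 evidently supplies the default NVMF_APP_SHM_ID of 0, which is exported here and folded into the target's argument list just below (-i "$NVMF_APP_SHM_ID" -e 0xFFFF). Combined with the netns prefix added after the ping checks, that yields the launch line this run uses for the target (workspace path abbreviated):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF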
00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.283 16:29:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:30.813 
16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:30.813 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:30.813 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:30.813 Found net devices under 0000:09:00.0: cvl_0_0 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:30.813 Found net devices under 0000:09:00.1: cvl_0_1 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:30.813 00:08:30.813 --- 10.0.0.2 ping statistics --- 00:08:30.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.813 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:30.813 00:08:30.813 --- 10.0.0.1 ping statistics --- 00:08:30.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.813 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1668128 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.813 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1668128 00:08:30.814 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1668128 ']' 00:08:30.814 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.814 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.814 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.814 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.814 16:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.814 [2024-05-15 16:29:38.036325] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:08:30.814 [2024-05-15 16:29:38.036422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.072 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.072 [2024-05-15 16:29:38.123183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.072 [2024-05-15 16:29:38.219109] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.072 [2024-05-15 16:29:38.219176] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.072 [2024-05-15 16:29:38.219192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.072 [2024-05-15 16:29:38.219206] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.072 [2024-05-15 16:29:38.219225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.072 [2024-05-15 16:29:38.221244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.072 [2024-05-15 16:29:38.221295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.072 [2024-05-15 16:29:38.225279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.072 [2024-05-15 16:29:38.225284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.330 [2024-05-15 16:29:38.386909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:31.330 16:29:38 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.330 [2024-05-15 16:29:38.443767] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:31.330 [2024-05-15 16:29:38.444096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:31.330 16:29:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:33.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.381 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:09.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.961 rmmod nvme_tcp 00:12:17.961 rmmod nvme_fabrics 00:12:17.961 rmmod nvme_keyring 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 
1668128 ']' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1668128 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1668128 ']' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1668128 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1668128 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1668128' 00:12:17.961 killing process with pid 1668128 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1668128 00:12:17.961 [2024-05-15 16:33:24.695416] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1668128 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.961 16:33:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.936 16:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.936 00:12:19.936 real 3m51.772s 00:12:19.936 user 14m40.982s 00:12:19.936 sys 0m31.059s 00:12:19.936 16:33:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.936 16:33:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.936 ************************************ 00:12:19.936 END TEST nvmf_connect_disconnect 00:12:19.936 ************************************ 00:12:19.936 16:33:27 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.936 16:33:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.936 16:33:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.936 16:33:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.936 ************************************ 00:12:19.936 START TEST nvmf_multitarget 00:12:19.936 ************************************ 00:12:19.936 
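The nvmf_connect_disconnect pass that ends above reduces to a short provisioning sequence plus an attach/detach loop. A condensed sketch, reconstructed from the rpc_cmd calls and variables visible in the trace (num_iterations=100, NVME_CONNECT='nvme connect -i 8'); the loop body itself is not printed in this log, so this is an illustration rather than the verbatim connect_disconnect.sh:

    # provision one TCP subsystem backed by a malloc bdev (RPCs as traced above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                      # 64 MiB bdev, 512-byte blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # cycle the initiator; each iteration prints one 'disconnected 1 controller(s)' line
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done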
16:33:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.936 * Looking for test storage... 00:12:19.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.936 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
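The prepare_net_devs step traced next repeats the PCI scan from the previous suite: common.sh matches each device's vendor/device ID against its e810/x722/mlx tables and keeps only ports with a bound, up kernel netdev. In essence (a condensed illustration of the sysfs walk, not the verbatim common.sh code):

    # this rig's ports are E810 (0x8086:0x159b), per the 'Found 0000:09:00.x' lines
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do                       # netdevs bound to this PCI function
            [[ $(<"$net/operstate") == up ]] && net_devs+=("${net##*/}")
        done
    done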
00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.937 16:33:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:22.462 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:22.462 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.462 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:22.462 Found net devices under 0000:09:00.0: cvl_0_0 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
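The second port is vetted the same way just below. Once both are collected, nvmf_tcp_init splits them between the root namespace and a scratch namespace so initiator and target traffic crosses the physical link; the commands here are reproduced from the trace that follows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the peer port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                             # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1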
00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:22.463 Found net devices under 0000:09:00.1: cvl_0_1 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.463 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:12:22.721 00:12:22.721 --- 10.0.0.2 ping statistics --- 00:12:22.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.721 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:22.721 00:12:22.721 --- 10.0.0.1 ping statistics --- 00:12:22.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.721 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1699518 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1699518 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1699518 ']' 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:22.721 16:33:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.721 [2024-05-15 16:33:29.790831] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:12:22.721 [2024-05-15 16:33:29.790912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.721 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.721 [2024-05-15 16:33:29.866032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.721 [2024-05-15 16:33:29.948491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.721 [2024-05-15 16:33:29.948542] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.721 [2024-05-15 16:33:29.948566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.721 [2024-05-15 16:33:29.948578] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.979 [2024-05-15 16:33:29.948588] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.979 [2024-05-15 16:33:29.948644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.979 [2024-05-15 16:33:29.948702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.979 [2024-05-15 16:33:29.948775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.979 [2024-05-15 16:33:29.948775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:22.979 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:23.236 "nvmf_tgt_1" 00:12:23.236 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:23.236 "nvmf_tgt_2" 00:12:23.236 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.236 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:23.493 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:23.493 
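multitarget.sh drives everything through multitarget_rpc.py, a thin RPC client that adds a target-name argument to the nvmf_* calls. The assertions traced around this point, condensed for readability (the -s 32 argument is understood here as the per-target subsystem cap):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]     # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1                # traced just below
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default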
16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:23.493 true 00:12:23.493 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:23.750 true 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.750 rmmod nvme_tcp 00:12:23.750 rmmod nvme_fabrics 00:12:23.750 rmmod nvme_keyring 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1699518 ']' 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1699518 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1699518 ']' 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1699518 00:12:23.750 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1699518 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1699518' 00:12:23.751 killing process with pid 1699518 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1699518 00:12:23.751 16:33:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1699518 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.008 16:33:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.009 16:33:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.537 16:33:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.537 00:12:26.537 real 0m6.177s 00:12:26.537 user 0m6.470s 00:12:26.537 sys 0m2.266s 00:12:26.537 16:33:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.537 16:33:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.537 ************************************ 00:12:26.537 END TEST nvmf_multitarget 00:12:26.537 ************************************ 00:12:26.537 16:33:33 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.537 16:33:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:26.537 16:33:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.537 16:33:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.537 ************************************ 00:12:26.537 START TEST nvmf_rpc 00:12:26.537 ************************************ 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.537 * Looking for test storage... 00:12:26.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.537 16:33:33 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.537 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.538 
16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.538 16:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:29.183 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:29.183 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:29.183 Found net devices under 0000:09:00.0: cvl_0_0 00:12:29.183 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.183 
16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:29.184 Found net devices under 0000:09:00.1: cvl_0_1 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:29.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:12:29.184 00:12:29.184 --- 10.0.0.2 ping statistics --- 00:12:29.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.184 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:29.184 00:12:29.184 --- 10.0.0.1 ping statistics --- 00:12:29.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.184 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:29.184 16:33:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1702028 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1702028 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1702028 ']' 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.184 [2024-05-15 16:33:36.048974] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
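The namespace plumbing traced above reduces to a short sequence; a condensed sketch, assuming the rig-specific interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing shown in this run:

    # One NIC port moves into a private namespace so initiator and target
    # exchange traffic over a real TCP path on the same host (as traced above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    # The target app then runs inside the namespace with the flags logged above:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

modprobe nvme-tcp on the host side loads the kernel initiator's transport before any connect is attempted.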
00:12:29.184 [2024-05-15 16:33:36.049055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.184 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.184 [2024-05-15 16:33:36.122866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.184 [2024-05-15 16:33:36.209135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.184 [2024-05-15 16:33:36.209208] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.184 [2024-05-15 16:33:36.209228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.184 [2024-05-15 16:33:36.209240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.184 [2024-05-15 16:33:36.209263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.184 [2024-05-15 16:33:36.209319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.184 [2024-05-15 16:33:36.209376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.184 [2024-05-15 16:33:36.209441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.184 [2024-05-15 16:33:36.209443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:29.184 "tick_rate": 2700000000, 00:12:29.184 "poll_groups": [ 00:12:29.184 { 00:12:29.184 "name": "nvmf_tgt_poll_group_000", 00:12:29.184 "admin_qpairs": 0, 00:12:29.184 "io_qpairs": 0, 00:12:29.184 "current_admin_qpairs": 0, 00:12:29.184 "current_io_qpairs": 0, 00:12:29.184 "pending_bdev_io": 0, 00:12:29.184 "completed_nvme_io": 0, 00:12:29.184 "transports": [] 00:12:29.184 }, 00:12:29.184 { 00:12:29.184 "name": "nvmf_tgt_poll_group_001", 00:12:29.184 "admin_qpairs": 0, 00:12:29.184 "io_qpairs": 0, 00:12:29.184 "current_admin_qpairs": 0, 00:12:29.184 "current_io_qpairs": 0, 00:12:29.184 "pending_bdev_io": 0, 00:12:29.184 "completed_nvme_io": 0, 00:12:29.184 "transports": [] 00:12:29.184 }, 00:12:29.184 { 00:12:29.184 "name": "nvmf_tgt_poll_group_002", 00:12:29.184 "admin_qpairs": 0, 00:12:29.184 "io_qpairs": 0, 00:12:29.184 "current_admin_qpairs": 0, 00:12:29.184 "current_io_qpairs": 0, 00:12:29.184 "pending_bdev_io": 0, 00:12:29.184 "completed_nvme_io": 0, 00:12:29.184 "transports": [] 
00:12:29.184 }, 00:12:29.184 { 00:12:29.184 "name": "nvmf_tgt_poll_group_003", 00:12:29.184 "admin_qpairs": 0, 00:12:29.184 "io_qpairs": 0, 00:12:29.184 "current_admin_qpairs": 0, 00:12:29.184 "current_io_qpairs": 0, 00:12:29.184 "pending_bdev_io": 0, 00:12:29.184 "completed_nvme_io": 0, 00:12:29.184 "transports": [] 00:12:29.184 } 00:12:29.184 ] 00:12:29.184 }' 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:29.184 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 [2024-05-15 16:33:36.465323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:29.443 "tick_rate": 2700000000, 00:12:29.443 "poll_groups": [ 00:12:29.443 { 00:12:29.443 "name": "nvmf_tgt_poll_group_000", 00:12:29.443 "admin_qpairs": 0, 00:12:29.443 "io_qpairs": 0, 00:12:29.443 "current_admin_qpairs": 0, 00:12:29.443 "current_io_qpairs": 0, 00:12:29.443 "pending_bdev_io": 0, 00:12:29.443 "completed_nvme_io": 0, 00:12:29.443 "transports": [ 00:12:29.443 { 00:12:29.443 "trtype": "TCP" 00:12:29.443 } 00:12:29.443 ] 00:12:29.443 }, 00:12:29.443 { 00:12:29.443 "name": "nvmf_tgt_poll_group_001", 00:12:29.443 "admin_qpairs": 0, 00:12:29.443 "io_qpairs": 0, 00:12:29.443 "current_admin_qpairs": 0, 00:12:29.443 "current_io_qpairs": 0, 00:12:29.443 "pending_bdev_io": 0, 00:12:29.443 "completed_nvme_io": 0, 00:12:29.443 "transports": [ 00:12:29.443 { 00:12:29.443 "trtype": "TCP" 00:12:29.443 } 00:12:29.443 ] 00:12:29.443 }, 00:12:29.443 { 00:12:29.443 "name": "nvmf_tgt_poll_group_002", 00:12:29.443 "admin_qpairs": 0, 00:12:29.443 "io_qpairs": 0, 00:12:29.443 "current_admin_qpairs": 0, 00:12:29.443 "current_io_qpairs": 0, 00:12:29.443 "pending_bdev_io": 0, 00:12:29.443 "completed_nvme_io": 0, 00:12:29.443 "transports": [ 00:12:29.443 { 00:12:29.443 "trtype": "TCP" 00:12:29.443 } 00:12:29.443 ] 00:12:29.443 }, 00:12:29.443 { 00:12:29.443 "name": "nvmf_tgt_poll_group_003", 00:12:29.443 "admin_qpairs": 0, 00:12:29.443 "io_qpairs": 0, 00:12:29.443 "current_admin_qpairs": 0, 00:12:29.443 "current_io_qpairs": 0, 00:12:29.443 "pending_bdev_io": 0, 00:12:29.443 "completed_nvme_io": 0, 00:12:29.443 "transports": [ 00:12:29.443 { 00:12:29.443 "trtype": "TCP" 00:12:29.443 } 00:12:29.443 ] 00:12:29.443 } 00:12:29.443 ] 
00:12:29.443 }' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 Malloc1 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.443 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.443 [2024-05-15 16:33:36.626304] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:29.443 [2024-05-15 16:33:36.626647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.444 16:33:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:29.444 [2024-05-15 16:33:36.649127] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:29.444 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:29.444 could not add new controller: failed to write to nvme-fabrics device 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.444 16:33:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.009 16:33:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
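What the rpc.sh@52–@63 steps above verify is per-host access control: with allow_any_host disabled and the host NQN absent from the subsystem's allowed list, the kernel initiator's connect is rejected with an I/O error; after nvmf_subsystem_add_host the identical connect succeeds. A condensed sketch, assuming rpc_cmd in this trace wraps scripts/rpc.py against /var/tmp/spdk.sock and abbreviating the host UUID NQN from this run as $HOSTNQN (the trace also passes a matching --hostid, omitted here):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_create_subsystem $SUBNQN -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns $SUBNQN Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d $SUBNQN   # enforce the host list
    scripts/rpc.py nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420
    # Host not yet allowed -> the write to /dev/nvme-fabrics fails with EIO,
    # which is exactly what the NOT wrapper above expects:
    nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 || true
    # Allow the host; the same connect now yields a namespace whose serial,
    # SPDKISFASTANDAWESOME, waitforserial polls for via lsblk:
    scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420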
00:12:30.009 16:33:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:30.009 16:33:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.009 16:33:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:30.009 16:33:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.534 [2024-05-15 16:33:39.329117] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:32.534 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.534 could not add new controller: failed to write to nvme-fabrics device 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.534 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.792 16:33:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.792 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:32.792 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.792 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:32.792 16:33:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:34.688 16:33:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:34.689 16:33:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:34.689 16:33:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.689 16:33:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:34.689 16:33:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.689 16:33:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:34.689 16:33:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.946 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.947 [2024-05-15 16:33:42.049963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.947 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.511 16:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.511 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:35.511 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.511 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:35.511 16:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:37.408 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:37.408 
16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:37.408 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 [2024-05-15 16:33:44.736712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.666 16:33:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.232 16:33:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.232 16:33:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.232 16:33:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.232 16:33:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.232 16:33:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.184 16:33:47 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.184 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.442 [2024-05-15 16:33:47.415542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.442 16:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.007 16:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.007 16:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:41.007 16:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.007 16:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.007 16:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:42.903 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 [2024-05-15 16:33:50.187531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.161 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.726 16:33:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:12:43.726 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:43.726 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.726 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:43.726 16:33:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:45.622 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:45.622 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:45.622 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.890 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 
[2024-05-15 16:33:52.949708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.891 16:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.462 16:33:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.462 16:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:46.462 16:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.462 16:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:46.462 16:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:48.360 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 [2024-05-15 16:33:55.680672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 [2024-05-15 16:33:55.728722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 [2024-05-15 16:33:55.776863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.618 
16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.618 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 [2024-05-15 16:33:55.825042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.619 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.876 16:33:55 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.876 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 [2024-05-15 16:33:55.873239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
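For reference, the subsystem churn traced above is the body of the five-pass loop started at target/rpc.sh@99 (seq 1 5); stripped of the xtrace plumbing, each pass boils down to the RPC sequence below. This is a sketch, not the script itself: it assumes the rpc_cmd seen in the trace is a thin wrapper around scripts/rpc.py and that an SPDK target is already listening on the default RPC socket.

  # One pass of the rpc.sh create/configure/teardown loop (sketch).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME           # rpc.sh@100
      "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@101
      "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1                           # rpc.sh@102
      "$rpc" nvmf_subsystem_allow_any_host "$nqn"                           # rpc.sh@103
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1                              # rpc.sh@105
      "$rpc" nvmf_delete_subsystem "$nqn"                                   # rpc.sh@107
  done

Each listener add is confirmed by the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the trace above.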
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:12:48.877 "tick_rate": 2700000000,
00:12:48.877 "poll_groups": [
00:12:48.877 {
00:12:48.877 "name": "nvmf_tgt_poll_group_000",
00:12:48.877 "admin_qpairs": 2,
00:12:48.877 "io_qpairs": 84,
00:12:48.877 "current_admin_qpairs": 0,
00:12:48.877 "current_io_qpairs": 0,
00:12:48.877 "pending_bdev_io": 0,
00:12:48.877 "completed_nvme_io": 179,
00:12:48.877 "transports": [
00:12:48.877 {
00:12:48.877 "trtype": "TCP"
00:12:48.877 }
00:12:48.877 ]
00:12:48.877 },
00:12:48.877 {
00:12:48.877 "name": "nvmf_tgt_poll_group_001",
00:12:48.877 "admin_qpairs": 2,
00:12:48.877 "io_qpairs": 84,
00:12:48.877 "current_admin_qpairs": 0,
00:12:48.877 "current_io_qpairs": 0,
00:12:48.877 "pending_bdev_io": 0,
00:12:48.877 "completed_nvme_io": 185,
00:12:48.877 "transports": [
00:12:48.877 {
00:12:48.877 "trtype": "TCP"
00:12:48.877 }
00:12:48.877 ]
00:12:48.877 },
00:12:48.877 {
00:12:48.877 "name": "nvmf_tgt_poll_group_002",
00:12:48.877 "admin_qpairs": 1,
00:12:48.877 "io_qpairs": 84,
00:12:48.877 "current_admin_qpairs": 0,
00:12:48.877 "current_io_qpairs": 0,
00:12:48.877 "pending_bdev_io": 0,
00:12:48.877 "completed_nvme_io": 184,
00:12:48.877 "transports": [
00:12:48.877 {
00:12:48.877 "trtype": "TCP"
00:12:48.877 }
00:12:48.877 ]
00:12:48.877 },
00:12:48.877 {
00:12:48.877 "name": "nvmf_tgt_poll_group_003",
00:12:48.877 "admin_qpairs": 2,
00:12:48.877 "io_qpairs": 84,
00:12:48.877 "current_admin_qpairs": 0,
00:12:48.877 "current_io_qpairs": 0,
00:12:48.877 "pending_bdev_io": 0,
00:12:48.877 "completed_nvme_io": 138,
00:12:48.877 "transports": [
00:12:48.877 {
00:12:48.877 "trtype": "TCP"
00:12:48.877 }
00:12:48.877 ]
00:12:48.877 }
00:12:48.877 ]
00:12:48.877 }'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:48.877 16:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:48.877 rmmod nvme_tcp
00:12:48.877 rmmod nvme_fabrics
00:12:48.877 rmmod nvme_keyring
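The jsum helper traced at target/rpc.sh@19-20 above just applies a jq filter to the captured statistics and totals the resulting numbers with awk. A minimal stand-alone sketch, assuming $stats holds the nvmf_get_stats JSON captured at rpc.sh@110:

  # Sum one numeric field across all poll groups (sketch of rpc.sh's jsum).
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7, matching (( 7 > 0 ))
  jsum '.poll_groups[].io_qpairs'      # 84*4 = 336, matching (( 336 > 0 ))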
16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1702028 ']' 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1702028 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1702028 ']' 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1702028 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1702028 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1702028' 00:12:48.877 killing process with pid 1702028 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1702028 00:12:48.877 [2024-05-15 16:33:56.090913] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:48.877 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1702028 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.135 16:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.666 16:33:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:51.666 00:12:51.666 real 0m25.085s 00:12:51.666 user 1m19.761s 00:12:51.666 sys 0m4.247s 00:12:51.666 16:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.666 16:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.666 ************************************ 00:12:51.666 END TEST nvmf_rpc 00:12:51.666 ************************************ 00:12:51.666 16:33:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:51.666 16:33:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:51.666 16:33:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.666 16:33:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:51.666 ************************************ 00:12:51.666 START TEST nvmf_invalid 00:12:51.666 ************************************ 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:51.666 * Looking for test storage... 00:12:51.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.666 16:33:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.667 16:33:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.196 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:54.197 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:54.197 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:54.197 Found net devices under 0000:09:00.0: cvl_0_0 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:54.197 Found net devices under 0000:09:00.1: cvl_0_1 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:12:54.197 00:12:54.197 --- 10.0.0.2 ping statistics --- 00:12:54.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.197 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:12:54.197 00:12:54.197 --- 10.0.0.1 ping statistics --- 00:12:54.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.197 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1706805 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1706805 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1706805 ']' 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.197 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.198 [2024-05-15 16:34:01.308866] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
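For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) splits the two E810 ports between namespaces: the target side (cvl_0_0, 10.0.0.2) moves into the private cvl_0_0_ns_spdk namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace. Condensed into plain commands, the setup amounts to roughly the following sketch:

  # Target port in its own netns, initiator port in the root netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

The two ping transcripts above (0.202 ms and 0.128 ms round trips) are the success criteria for this setup; nvmf_tgt is then launched inside the namespace via ip netns exec, as the nvmfappstart trace that follows shows.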
00:12:54.198 [2024-05-15 16:34:01.308944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.198 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.198 [2024-05-15 16:34:01.388396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.455 [2024-05-15 16:34:01.480386] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.455 [2024-05-15 16:34:01.480448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.455 [2024-05-15 16:34:01.480464] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.455 [2024-05-15 16:34:01.480478] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.455 [2024-05-15 16:34:01.480490] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.455 [2024-05-15 16:34:01.480575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.455 [2024-05-15 16:34:01.480629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.455 [2024-05-15 16:34:01.480690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.455 [2024-05-15 16:34:01.480693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.455 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.455 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:54.455 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.455 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.455 16:34:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.455 16:34:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.456 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:54.456 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10889 00:12:54.713 [2024-05-15 16:34:01.867775] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:54.713 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:54.713 { 00:12:54.713 "nqn": "nqn.2016-06.io.spdk:cnode10889", 00:12:54.713 "tgt_name": "foobar", 00:12:54.713 "method": "nvmf_create_subsystem", 00:12:54.713 "req_id": 1 00:12:54.713 } 00:12:54.713 Got JSON-RPC error response 00:12:54.713 response: 00:12:54.713 { 00:12:54.713 "code": -32603, 00:12:54.713 "message": "Unable to find target foobar" 00:12:54.713 }' 00:12:54.713 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:54.713 { 00:12:54.713 "nqn": "nqn.2016-06.io.spdk:cnode10889", 00:12:54.713 "tgt_name": "foobar", 00:12:54.713 "method": "nvmf_create_subsystem", 00:12:54.713 "req_id": 1 00:12:54.713 } 00:12:54.713 Got JSON-RPC error response 00:12:54.713 response: 00:12:54.713 { 00:12:54.713 "code": -32603, 00:12:54.713 "message": "Unable to find target foobar" 00:12:54.713 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:54.713 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:54.713 16:34:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21549 00:12:54.970 [2024-05-15 16:34:02.128656] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21549: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:54.970 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:54.970 { 00:12:54.970 "nqn": "nqn.2016-06.io.spdk:cnode21549", 00:12:54.970 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.970 "method": "nvmf_create_subsystem", 00:12:54.970 "req_id": 1 00:12:54.970 } 00:12:54.970 Got JSON-RPC error response 00:12:54.970 response: 00:12:54.970 { 00:12:54.970 "code": -32602, 00:12:54.970 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.970 }' 00:12:54.970 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:54.970 { 00:12:54.970 "nqn": "nqn.2016-06.io.spdk:cnode21549", 00:12:54.970 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.970 "method": "nvmf_create_subsystem", 00:12:54.970 "req_id": 1 00:12:54.970 } 00:12:54.970 Got JSON-RPC error response 00:12:54.970 response: 00:12:54.970 { 00:12:54.970 "code": -32602, 00:12:54.970 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.970 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.970 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:54.970 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16056 00:12:55.227 [2024-05-15 16:34:02.369426] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16056: invalid model number 'SPDK_Controller' 00:12:55.227 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:55.227 { 00:12:55.227 "nqn": "nqn.2016-06.io.spdk:cnode16056", 00:12:55.227 "model_number": "SPDK_Controller\u001f", 00:12:55.227 "method": "nvmf_create_subsystem", 00:12:55.227 "req_id": 1 00:12:55.227 } 00:12:55.227 Got JSON-RPC error response 00:12:55.227 response: 00:12:55.227 { 00:12:55.227 "code": -32602, 00:12:55.227 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.227 }' 00:12:55.227 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:55.227 { 00:12:55.227 "nqn": "nqn.2016-06.io.spdk:cnode16056", 00:12:55.227 "model_number": "SPDK_Controller\u001f", 00:12:55.227 "method": "nvmf_create_subsystem", 00:12:55.227 "req_id": 1 00:12:55.227 } 00:12:55.227 Got JSON-RPC error response 00:12:55.227 response: 00:12:55.227 { 00:12:55.227 "code": -32602, 00:12:55.227 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.227 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.227 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:55.227 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:55.227 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.228 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.485 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:12:55.486 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kRLnj+PQ#[iG?n,oI{s+r' 00:12:55.486 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'kRLnj+PQ#[iG?n,oI{s+r' nqn.2016-06.io.spdk:cnode13352 00:12:55.486 [2024-05-15 16:34:02.690485] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13352: invalid serial number 'kRLnj+PQ#[iG?n,oI{s+r' 00:12:55.486 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:55.486 { 00:12:55.486 "nqn": "nqn.2016-06.io.spdk:cnode13352", 00:12:55.486 "serial_number": "kRLnj+PQ#[iG?n,oI{s+r", 00:12:55.486 "method": "nvmf_create_subsystem", 00:12:55.486 "req_id": 1 00:12:55.486 } 00:12:55.486 Got JSON-RPC error response 00:12:55.486 response: 00:12:55.486 { 00:12:55.486 "code": -32602, 00:12:55.486 "message": "Invalid SN kRLnj+PQ#[iG?n,oI{s+r" 00:12:55.486 }' 00:12:55.486 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:55.486 { 00:12:55.486 "nqn": "nqn.2016-06.io.spdk:cnode13352", 00:12:55.486 "serial_number": "kRLnj+PQ#[iG?n,oI{s+r", 00:12:55.486 "method": "nvmf_create_subsystem", 00:12:55.486 "req_id": 1 00:12:55.486 } 00:12:55.486 Got JSON-RPC error response 00:12:55.486 response: 00:12:55.486 { 00:12:55.486 "code": -32602, 00:12:55.486 "message": "Invalid SN kRLnj+PQ#[iG?n,oI{s+r" 00:12:55.486 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
104 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.744 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x65' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 
00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '^*9;~=7QIeMT>'\''AL&chn+sn#-)eTi=An-)~fcDyx,' 00:12:55.745 16:34:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '^*9;~=7QIeMT>'\''AL&chn+sn#-)eTi=An-)~fcDyx,' nqn.2016-06.io.spdk:cnode9101 00:12:56.003 [2024-05-15 16:34:03.087782] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9101: invalid model number '^*9;~=7QIeMT>'AL&chn+sn#-)eTi=An-)~fcDyx,' 00:12:56.003 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:56.003 { 00:12:56.003 "nqn": 
"nqn.2016-06.io.spdk:cnode9101", 00:12:56.003 "model_number": "^*9;~=7QIeMT>'\''AL&chn+sn#-)eTi=An-)~fcDyx,", 00:12:56.003 "method": "nvmf_create_subsystem", 00:12:56.003 "req_id": 1 00:12:56.003 } 00:12:56.003 Got JSON-RPC error response 00:12:56.003 response: 00:12:56.003 { 00:12:56.003 "code": -32602, 00:12:56.003 "message": "Invalid MN ^*9;~=7QIeMT>'\''AL&chn+sn#-)eTi=An-)~fcDyx," 00:12:56.003 }' 00:12:56.003 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:56.003 { 00:12:56.003 "nqn": "nqn.2016-06.io.spdk:cnode9101", 00:12:56.003 "model_number": "^*9;~=7QIeMT>'AL&chn+sn#-)eTi=An-)~fcDyx,", 00:12:56.003 "method": "nvmf_create_subsystem", 00:12:56.003 "req_id": 1 00:12:56.003 } 00:12:56.003 Got JSON-RPC error response 00:12:56.003 response: 00:12:56.003 { 00:12:56.003 "code": -32602, 00:12:56.003 "message": "Invalid MN ^*9;~=7QIeMT>'AL&chn+sn#-)eTi=An-)~fcDyx," 00:12:56.003 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:56.003 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:56.260 [2024-05-15 16:34:03.352714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.260 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:56.517 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:56.517 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:56.517 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:56.517 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:56.517 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:56.774 [2024-05-15 16:34:03.850341] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:56.774 [2024-05-15 16:34:03.850441] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:56.774 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:56.774 { 00:12:56.774 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.774 "listen_address": { 00:12:56.774 "trtype": "tcp", 00:12:56.774 "traddr": "", 00:12:56.774 "trsvcid": "4421" 00:12:56.774 }, 00:12:56.774 "method": "nvmf_subsystem_remove_listener", 00:12:56.774 "req_id": 1 00:12:56.774 } 00:12:56.774 Got JSON-RPC error response 00:12:56.774 response: 00:12:56.774 { 00:12:56.774 "code": -32602, 00:12:56.774 "message": "Invalid parameters" 00:12:56.774 }' 00:12:56.774 16:34:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:56.774 { 00:12:56.774 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.774 "listen_address": { 00:12:56.774 "trtype": "tcp", 00:12:56.774 "traddr": "", 00:12:56.774 "trsvcid": "4421" 00:12:56.774 }, 00:12:56.774 "method": "nvmf_subsystem_remove_listener", 00:12:56.774 "req_id": 1 00:12:56.774 } 00:12:56.774 Got JSON-RPC error response 00:12:56.774 response: 00:12:56.774 { 00:12:56.774 "code": -32602, 00:12:56.774 "message": "Invalid parameters" 00:12:56.774 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:56.774 16:34:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20114 -i 0 00:12:57.031 [2024-05-15 16:34:04.095125] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20114: invalid cntlid range [0-65519] 00:12:57.031 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:57.031 { 00:12:57.031 "nqn": "nqn.2016-06.io.spdk:cnode20114", 00:12:57.031 "min_cntlid": 0, 00:12:57.031 "method": "nvmf_create_subsystem", 00:12:57.031 "req_id": 1 00:12:57.031 } 00:12:57.031 Got JSON-RPC error response 00:12:57.031 response: 00:12:57.031 { 00:12:57.031 "code": -32602, 00:12:57.031 "message": "Invalid cntlid range [0-65519]" 00:12:57.031 }' 00:12:57.031 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:57.031 { 00:12:57.031 "nqn": "nqn.2016-06.io.spdk:cnode20114", 00:12:57.031 "min_cntlid": 0, 00:12:57.031 "method": "nvmf_create_subsystem", 00:12:57.031 "req_id": 1 00:12:57.031 } 00:12:57.031 Got JSON-RPC error response 00:12:57.031 response: 00:12:57.031 { 00:12:57.031 "code": -32602, 00:12:57.031 "message": "Invalid cntlid range [0-65519]" 00:12:57.031 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.031 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9003 -i 65520 00:12:57.288 [2024-05-15 16:34:04.355957] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9003: invalid cntlid range [65520-65519] 00:12:57.288 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:57.288 { 00:12:57.288 "nqn": "nqn.2016-06.io.spdk:cnode9003", 00:12:57.288 "min_cntlid": 65520, 00:12:57.288 "method": "nvmf_create_subsystem", 00:12:57.288 "req_id": 1 00:12:57.288 } 00:12:57.288 Got JSON-RPC error response 00:12:57.288 response: 00:12:57.288 { 00:12:57.288 "code": -32602, 00:12:57.288 "message": "Invalid cntlid range [65520-65519]" 00:12:57.288 }' 00:12:57.288 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:57.288 { 00:12:57.288 "nqn": "nqn.2016-06.io.spdk:cnode9003", 00:12:57.288 "min_cntlid": 65520, 00:12:57.288 "method": "nvmf_create_subsystem", 00:12:57.288 "req_id": 1 00:12:57.288 } 00:12:57.288 Got JSON-RPC error response 00:12:57.288 response: 00:12:57.288 { 00:12:57.288 "code": -32602, 00:12:57.288 "message": "Invalid cntlid range [65520-65519]" 00:12:57.288 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.288 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13918 -I 0 00:12:57.546 [2024-05-15 16:34:04.596829] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13918: invalid cntlid range [1-0] 00:12:57.546 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:57.546 { 00:12:57.546 "nqn": "nqn.2016-06.io.spdk:cnode13918", 00:12:57.546 "max_cntlid": 0, 00:12:57.546 "method": "nvmf_create_subsystem", 00:12:57.546 "req_id": 1 00:12:57.546 } 00:12:57.546 Got JSON-RPC error response 00:12:57.546 response: 00:12:57.546 { 00:12:57.546 "code": -32602, 00:12:57.546 "message": "Invalid cntlid range [1-0]" 00:12:57.546 }' 00:12:57.546 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:57.546 { 00:12:57.546 "nqn": 
"nqn.2016-06.io.spdk:cnode13918", 00:12:57.546 "max_cntlid": 0, 00:12:57.546 "method": "nvmf_create_subsystem", 00:12:57.546 "req_id": 1 00:12:57.546 } 00:12:57.546 Got JSON-RPC error response 00:12:57.546 response: 00:12:57.546 { 00:12:57.546 "code": -32602, 00:12:57.546 "message": "Invalid cntlid range [1-0]" 00:12:57.546 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.546 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21285 -I 65520 00:12:57.803 [2024-05-15 16:34:04.841635] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21285: invalid cntlid range [1-65520] 00:12:57.803 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:57.803 { 00:12:57.803 "nqn": "nqn.2016-06.io.spdk:cnode21285", 00:12:57.803 "max_cntlid": 65520, 00:12:57.803 "method": "nvmf_create_subsystem", 00:12:57.803 "req_id": 1 00:12:57.803 } 00:12:57.803 Got JSON-RPC error response 00:12:57.803 response: 00:12:57.803 { 00:12:57.803 "code": -32602, 00:12:57.803 "message": "Invalid cntlid range [1-65520]" 00:12:57.803 }' 00:12:57.803 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:57.803 { 00:12:57.803 "nqn": "nqn.2016-06.io.spdk:cnode21285", 00:12:57.803 "max_cntlid": 65520, 00:12:57.803 "method": "nvmf_create_subsystem", 00:12:57.803 "req_id": 1 00:12:57.803 } 00:12:57.803 Got JSON-RPC error response 00:12:57.803 response: 00:12:57.803 { 00:12:57.803 "code": -32602, 00:12:57.803 "message": "Invalid cntlid range [1-65520]" 00:12:57.803 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.803 16:34:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8875 -i 6 -I 5 00:12:58.061 [2024-05-15 16:34:05.082356] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8875: invalid cntlid range [6-5] 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:58.061 { 00:12:58.061 "nqn": "nqn.2016-06.io.spdk:cnode8875", 00:12:58.061 "min_cntlid": 6, 00:12:58.061 "max_cntlid": 5, 00:12:58.061 "method": "nvmf_create_subsystem", 00:12:58.061 "req_id": 1 00:12:58.061 } 00:12:58.061 Got JSON-RPC error response 00:12:58.061 response: 00:12:58.061 { 00:12:58.061 "code": -32602, 00:12:58.061 "message": "Invalid cntlid range [6-5]" 00:12:58.061 }' 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:58.061 { 00:12:58.061 "nqn": "nqn.2016-06.io.spdk:cnode8875", 00:12:58.061 "min_cntlid": 6, 00:12:58.061 "max_cntlid": 5, 00:12:58.061 "method": "nvmf_create_subsystem", 00:12:58.061 "req_id": 1 00:12:58.061 } 00:12:58.061 Got JSON-RPC error response 00:12:58.061 response: 00:12:58.061 { 00:12:58.061 "code": -32602, 00:12:58.061 "message": "Invalid cntlid range [6-5]" 00:12:58.061 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:58.061 { 00:12:58.061 "name": "foobar", 00:12:58.061 "method": "nvmf_delete_target", 00:12:58.061 "req_id": 1 00:12:58.061 } 00:12:58.061 Got JSON-RPC error response 00:12:58.061 response: 
00:12:58.061 { 00:12:58.061 "code": -32602, 00:12:58.061 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:58.061 }' 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:58.061 { 00:12:58.061 "name": "foobar", 00:12:58.061 "method": "nvmf_delete_target", 00:12:58.061 "req_id": 1 00:12:58.061 } 00:12:58.061 Got JSON-RPC error response 00:12:58.061 response: 00:12:58.061 { 00:12:58.061 "code": -32602, 00:12:58.061 "message": "The specified target doesn't exist, cannot delete it." 00:12:58.061 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.061 rmmod nvme_tcp 00:12:58.061 rmmod nvme_fabrics 00:12:58.061 rmmod nvme_keyring 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1706805 ']' 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1706805 00:12:58.061 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 1706805 ']' 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 1706805 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1706805 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1706805' 00:12:58.351 killing process with pid 1706805 00:12:58.351 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 1706805 00:12:58.352 [2024-05-15 16:34:05.318448] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 1706805 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.352 16:34:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.884 16:34:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.884 00:13:00.884 real 0m9.154s 00:13:00.884 user 0m19.897s 00:13:00.884 sys 0m2.880s 00:13:00.884 16:34:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.884 16:34:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.884 ************************************ 00:13:00.884 END TEST nvmf_invalid 00:13:00.884 ************************************ 00:13:00.884 16:34:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:00.884 16:34:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.884 16:34:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.884 16:34:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.884 ************************************ 00:13:00.884 START TEST nvmf_abort 00:13:00.884 ************************************ 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:00.884 * Looking for test storage... 00:13:00.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.884 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.885 
16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.885 16:34:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:03.413 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.413 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:03.414 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:03.414 Found net devices under 0000:09:00.0: cvl_0_0 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:03.414 Found net devices under 0000:09:00.1: cvl_0_1 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
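Note: nvmftestinit above builds a single-host TCP topology out of the two ice-driven ports (0000:09:00.0 and 0000:09:00.1): cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). The target application itself is then launched inside the namespace via ip netns exec (see nvmfappstart below). Condensed from the commands in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up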
00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:13:03.414 00:13:03.414 --- 10.0.0.2 ping statistics --- 00:13:03.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.414 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:13:03.414 00:13:03.414 --- 10.0.0.1 ping statistics --- 00:13:03.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.414 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1709735 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1709735 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1709735 ']' 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.414 [2024-05-15 16:34:10.320854] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:13:03.414 [2024-05-15 16:34:10.320936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.414 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.414 [2024-05-15 16:34:10.405506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.414 [2024-05-15 16:34:10.501521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.414 [2024-05-15 16:34:10.501590] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.414 [2024-05-15 16:34:10.501606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.414 [2024-05-15 16:34:10.501620] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.414 [2024-05-15 16:34:10.501632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.414 [2024-05-15 16:34:10.501727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.414 [2024-05-15 16:34:10.501779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.414 [2024-05-15 16:34:10.501782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.414 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 [2024-05-15 16:34:10.653076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 Malloc0 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 Delay0 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:03.672 16:34:10 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 [2024-05-15 16:34:10.726993] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:03.672 [2024-05-15 16:34:10.727336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.672 16:34:10 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:03.672 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.672 [2024-05-15 16:34:10.792394] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:06.198 Initializing NVMe Controllers 00:13:06.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:06.198 controller IO queue size 128 less than required 00:13:06.198 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:06.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:06.198 Initialization complete. Launching workers. 
00:13:06.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33530 00:13:06.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33591, failed to submit 62 00:13:06.198 success 33534, unsuccess 57, failed 0 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.198 rmmod nvme_tcp 00:13:06.198 rmmod nvme_fabrics 00:13:06.198 rmmod nvme_keyring 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1709735 ']' 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1709735 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1709735 ']' 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1709735 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1709735 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1709735' 00:13:06.198 killing process with pid 1709735 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1709735 00:13:06.198 [2024-05-15 16:34:12.907795] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:06.198 16:34:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1709735 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.198 
16:34:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.198 16:34:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.098 16:34:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.098 00:13:08.098 real 0m7.563s 00:13:08.098 user 0m10.246s 00:13:08.098 sys 0m2.802s 00:13:08.098 16:34:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.098 16:34:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.098 ************************************ 00:13:08.098 END TEST nvmf_abort 00:13:08.098 ************************************ 00:13:08.098 16:34:15 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:08.098 16:34:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:08.098 16:34:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.098 16:34:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.098 ************************************ 00:13:08.098 START TEST nvmf_ns_hotplug_stress 00:13:08.098 ************************************ 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:08.098 * Looking for test storage... 00:13:08.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.098 
16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.098 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.357 
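The common.sh block above also minted a host identity (NVME_HOSTNQN via nvme gen-hostnqn, NVME_HOSTID as its uuid) along with the NVME_CONNECT and NVME_SUBNQN helpers. Purely as an illustration of how those variables get consumed (nothing in this section runs a kernel-initiator connect; the subsystem NQN below is just the NVME_SUBNQN default from the log):

    # Hypothetical kernel-initiator connect using the generated host identity
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"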
16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.357 16:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.887 16:34:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:10.887 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:10.887 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.887 
16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:10.887 Found net devices under 0000:09:00.0: cvl_0_0 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:10.887 Found net devices under 0000:09:00.1: cvl_0_1 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.887 
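The probe loop above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b) and resolved each PCI function to its kernel net device through sysfs. A condensed sketch of that lookup, using the PCI addresses from this host (the same /sys path the pci_net_devs expansion in nvmf/common.sh walks):

    # Resolve PCI functions to net devices the way gather_supported_nvmf_pci_devs does
    for pci in 0000:09:00.0 0000:09:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"    # cvl_0_0 / cvl_0_1 on this host
      done
    done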
16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.887 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.888 16:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:13:10.888 00:13:10.888 --- 10.0.0.2 ping statistics --- 00:13:10.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.888 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:13:10.888 00:13:10.888 --- 10.0.0.1 ping statistics --- 00:13:10.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.888 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1712362 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1712362 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1712362 ']' 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:10.888 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.146 [2024-05-15 16:34:18.143681] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:13:11.146 [2024-05-15 16:34:18.143776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.146 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.146 [2024-05-15 16:34:18.225129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.146 [2024-05-15 16:34:18.308759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
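The two pings above (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) confirm the loopback topology nvmf_tcp_init built just before: one E810 port moved into a private network namespace to act as the target, its sibling left behind as the initiator. Condensed from the xtrace:

    # Target port lives in its own netns; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP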
00:13:11.146 [2024-05-15 16:34:18.308823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.146 [2024-05-15 16:34:18.308836] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.146 [2024-05-15 16:34:18.308848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.146 [2024-05-15 16:34:18.308857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.146 [2024-05-15 16:34:18.308947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.146 [2024-05-15 16:34:18.309012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.146 [2024-05-15 16:34:18.309015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:11.404 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:11.661 [2024-05-15 16:34:18.662666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.661 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.918 16:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.176 [2024-05-15 16:34:19.161399] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:12.176 [2024-05-15 16:34:19.161693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.176 16:34:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:12.433 16:34:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:12.691 Malloc0 00:13:12.691 16:34:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:12.948 Delay0 00:13:12.948 16:34:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.206 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:13.463 NULL1 00:13:13.463 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:13.720 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1712659 00:13:13.720 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:13.720 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:13.720 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.720 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.720 16:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.976 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:13.976 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:14.233 true 00:13:14.233 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:14.233 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.489 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.746 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:14.746 16:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:15.004 true 00:13:15.004 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:15.004 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.261 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.518 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:15.518 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:13:15.775 true 00:13:15.775 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:15.775 16:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.144 Read completed with error (sct=0, sc=11) 00:13:17.144 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.144 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:17.144 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:17.400 true 00:13:17.400 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:17.400 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.658 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.923 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:17.923 16:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:18.224 true 00:13:18.224 16:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:18.224 16:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.156 16:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.156 16:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:19.156 16:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:19.413 true 00:13:19.413 16:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:19.413 16:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.670 16:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.929 16:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:19.929 
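The cycle repeating above and below (add_ns, bump null_size, bdev_null_resize, kill -0, remove_ns) is the hotplug stress loop itself: while spdk_nvme_perf (PID 1712659, held in PERF_PID) keeps I/O running against cnode1, the script keeps hot-removing and re-adding namespace 1 and growing NULL1 so the namespaces change shape on every pass. Reconstructed as a sketch from the ns_hotplug_stress.sh@44-50 xtrace (names follow the log; the exact loop structure in the script may differ):

    while kill -0 "$PERF_PID" 2>/dev/null; do                         # run until perf exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove under load
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))                                    # 1001, 1002, 1003, ...
      rpc.py bdev_null_resize NULL1 "$null_size"                      # resize the NULL1 namespace too
    done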
16:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:20.187 true 00:13:20.187 16:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:20.187 16:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.119 16:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.376 16:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:21.376 16:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:21.633 true 00:13:21.633 16:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:21.633 16:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.890 16:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.147 16:34:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:22.147 16:34:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:22.147 true 00:13:22.404 16:34:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:22.404 16:34:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.335 16:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.335 16:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:23.335 16:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:23.593 true 00:13:23.593 16:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:23.593 16:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.850 16:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.107 16:34:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:24.107 16:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:24.364 true 00:13:24.364 16:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:24.364 16:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.295 16:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.552 16:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:25.552 16:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:25.809 true 00:13:25.809 16:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:25.809 16:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.066 16:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.323 16:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:26.323 16:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:26.580 true 00:13:26.580 16:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:26.580 16:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.512 16:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.769 16:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:27.769 16:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:28.026 true 00:13:28.026 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:28.026 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.283 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.540 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:28.540 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:28.797 true 00:13:28.797 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:28.797 16:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.729 16:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.987 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:29.987 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:30.245 true 00:13:30.245 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:30.245 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.501 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.758 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:30.758 16:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:31.016 true 00:13:31.016 16:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:31.016 16:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.946 16:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.204 16:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:32.204 16:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:32.461 true 00:13:32.461 16:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:32.461 16:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.718 16:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.975 16:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:32.975 16:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:33.233 true 00:13:33.233 16:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:33.233 16:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.193 16:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.193 16:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:34.193 16:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:34.458 true 00:13:34.458 16:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:34.458 16:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.715 16:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.973 16:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:34.973 16:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:35.230 true 00:13:35.230 16:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:35.230 16:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.161 16:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.417 16:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:36.417 16:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:36.673 true 00:13:36.673 16:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659 00:13:36.673 16:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.929 16:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:37.185 16:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:13:37.185 16:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:13:37.441 true
00:13:37.441 16:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:37.441 16:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:38.371 16:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:38.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.628 16:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:13:38.628 16:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:13:38.886 true
00:13:38.886 16:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:38.886 16:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:39.143 16:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:39.143 16:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:13:39.143 16:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:13:39.400 true
00:13:39.400 16:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:39.400 16:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:40.334 16:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:40.591 16:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:13:40.591 16:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:13:40.849 true
00:13:40.849 16:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:40.849 16:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:41.106 16:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:41.363 16:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:13:41.363 16:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:13:41.620 true
00:13:41.620 16:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:41.620 16:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:42.552 16:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:42.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.810 16:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:13:42.810 16:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:13:43.072 true
00:13:43.072 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:43.072 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:43.330 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:43.587 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:43.587 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:43.843 true
00:13:43.843 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:43.843 16:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:44.101 Initializing NVMe Controllers
00:13:44.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:44.101 Controller IO queue size 128, less than required.
00:13:44.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:44.101 Controller IO queue size 128, less than required.
00:13:44.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:44.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:44.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:44.101 Initialization complete. Launching workers.
00:13:44.101 ========================================================
00:13:44.101 Latency(us)
00:13:44.101 Device Information : IOPS MiB/s Average min max
00:13:44.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 561.12 0.27 109412.61 2907.67 1011727.19
00:13:44.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9605.38 4.69 13288.38 3056.43 369827.48
00:13:44.101 ========================================================
00:13:44.101 Total : 10166.49 4.96 18593.76 2907.67 1011727.19
00:13:44.101
00:13:44.101 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:44.358 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:13:44.358 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:13:44.616 true
00:13:44.616 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712659
00:13:44.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1712659) - No such process
00:13:44.616 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1712659
00:13:44.616 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:44.874 16:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:45.131 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:45.131 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:45.131 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:45.131 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:45.131 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:45.388 null0
00:13:45.388 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:45.388 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:45.388 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:45.646 null1
00:13:45.646 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:45.646 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:45.646 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:45.904 null2
00:13:45.904 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:45.904 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:45.904 16:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:13:46.161 null3
00:13:46.161 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.161 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.161 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:13:46.418 null4
00:13:46.418 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.418 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.418 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:13:46.675 null5
00:13:46.675 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.675 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.675 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:13:46.932 null6
00:13:46.932 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.932 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.932 16:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:13:47.190 null7
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
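[Editor's reading aid] The sh@NN markers above make the traced portions of test/nvmf/target/ns_hotplug_stress.sh reconstructable. Below is a minimal sketch of the two phases logged so far, inferred purely from this trace; variable names such as rpc, nqn, perf_pid and null_size are placeholders, not necessarily the script's own.

    #!/usr/bin/env bash
    # Placeholders inferred from the trace -- not the script's actual names.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # ns_hotplug_stress.sh@14-18: one worker; hot-adds bdev $2 as NSID $1,
    # then hot-removes it, ten times in a row.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    # ns_hotplug_stress.sh@44-50: while the I/O generator (pid 1712659 above)
    # is still alive, churn namespace 1 and grow NULL1 by one unit per pass,
    # so the target handles resizes under load; "kill -0" is only a liveness
    # probe, which is why its failure above ends the loop.
    while kill -0 "$perf_pid" 2> /dev/null; do
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0
        ((++null_size))
        $rpc bdev_null_resize NULL1 "$null_size"
    done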
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
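[Editor's reading aid] The @58-@66 lines around this point are the fan-out that launches those workers: eight null bdevs, one backgrounded add_remove worker per bdev, PIDs collected for a final join. A condensed sketch, again reconstructed from the markers (per rpc.py, the "100 4096" arguments are the null bdev's size in MB and its block size in bytes):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # worker n churns NSID n on null(n-1)
        pids+=($!)
    done
    wait "${pids[@]}"   # the "wait 1716708 1716709 ..." line below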
00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.190 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
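[Editor's reading aid] Outside the harness, the same hot-plug can be driven by hand against a running target. A hedged example using only invocations that appear verbatim in this log; on a connected initiator, removing or adding a namespace is expected to trigger a namespace-change async event and a rescan, which is exactly the path this test stresses:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attach bdev null1 to the subsystem as namespace 2 ...
    $rpc nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
    # ... and detach it again.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2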
00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1716708 1716709 1716711 1716713 1716715 1716717 1716719 1716721 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.191 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.448 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.705 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.962 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.962 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.962 16:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.962 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.962 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:47.962 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.962 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.962 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.220 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.477 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.477 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.477 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.478 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.478 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.478 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.478 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.478 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.735 16:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.998 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.307 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.308 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.308 
16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.565 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.822 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.823 16:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.080 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.338 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.596 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.597 
16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.597 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.597 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.597 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.597 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.597 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.597 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.908 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.166 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.423 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.423 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.423 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.423 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:51.423 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.423 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.424 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.681 
16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.681 16:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.939 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.196 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.196 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.196 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.196 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.196 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.197 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.197 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.197 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
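The interleaved xtrace above maps to a loop of roughly this shape -- a minimal sketch reconstructed from the ns_hotplug_stress.sh line references in the trace (@16 for the counter, @17 for add, @18 for remove); the backgrounded dispatch and the wait points are assumptions, inferred only from the out-of-order completion of the eight RPCs in each batch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do        # sh@16: the (( ++i )) / (( i < 10 )) lines
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # sh@17
        done
        wait                              # all eight namespaces attached
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &                    # sh@18
        done
        wait                              # all eight hot-removed again
    done

Each pass attaches null bdevs null0..null7 as namespaces 1..8 of cnode1 and then detaches them, which is exactly the add/remove churn visible in the timestamps above.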
00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.454 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.454 rmmod nvme_tcp 00:13:52.713 rmmod nvme_fabrics 00:13:52.713 rmmod nvme_keyring 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1712362 ']' 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1712362 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1712362 ']' 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1712362 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1712362 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1712362' 00:13:52.713 killing process with pid 1712362 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1712362 00:13:52.713 [2024-05-15 16:34:59.763498] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:52.713 16:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1712362 00:13:52.972 16:35:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.972 16:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.874 16:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.874 00:13:54.874 real 0m46.789s 00:13:54.874 user 3m31.356s 00:13:54.874 sys 0m16.548s 00:13:54.874 16:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.874 16:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.874 ************************************ 00:13:54.874 END TEST nvmf_ns_hotplug_stress 00:13:54.874 ************************************ 00:13:54.874 16:35:02 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:54.874 16:35:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:54.874 16:35:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:54.874 16:35:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.132 ************************************ 00:13:55.132 START TEST nvmf_connect_stress 00:13:55.132 ************************************ 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:55.132 * Looking for test storage... 
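The nvmf_ns_hotplug_stress teardown traced just above (nvmftestfini) reduces to approximately this sequence; the common.sh line numbers are taken from the trace, while the retry form, the $nvmfpid name, and the netns deletion command are assumptions rather than the verbatim function body:

    sync                                     # common.sh@117
    for i in {1..20}; do                     # common.sh@121
        modprobe -v -r nvme-tcp && break     # common.sh@122: the rmmod nvme_tcp/... lines
    done
    modprobe -v -r nvme-fabrics              # common.sh@123
    kill "$nvmfpid"                          # killprocess 1712362 (common.sh@490)
    ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                 # common.sh@279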
00:13:55.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.132 16:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:57.662 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:57.662 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:57.662 Found net devices under 0000:09:00.0: cvl_0_0 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.662 16:35:04 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:57.662 Found net devices under 0000:09:00.1: cvl_0_1 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:13:57.662 00:13:57.662 --- 10.0.0.2 ping statistics --- 00:13:57.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.662 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:13:57.662 00:13:57.662 --- 10.0.0.1 ping statistics --- 00:13:57.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.662 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:57.662 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1719762 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1719762 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1719762 ']' 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:57.663 16:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.663 [2024-05-15 16:35:04.808195] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
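Condensed from the nvmf_tcp_init trace above: the two e810 ports are wired into a local target/initiator pair, with cvl_0_0 moved into a network namespace as the target interface (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1). The commands, copied in trace order:

    ip netns add cvl_0_0_ns_spdk                                          # common.sh@248
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # common.sh@251
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # common.sh@254
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # common.sh@255
    ip link set cvl_0_1 up                                                # common.sh@258
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                  # common.sh@260
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                       # common.sh@261
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # common.sh@264
    ping -c 1 10.0.0.2                                 # @267: initiator -> target, 0.223 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # @268: target -> initiator, 0.191 ms

With both pings answering, the script loads nvme-tcp and starts nvmf_tgt inside the namespace, which is what the nvmfappstart output below shows.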
00:13:57.663 [2024-05-15 16:35:04.808296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.663 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.663 [2024-05-15 16:35:04.888551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.921 [2024-05-15 16:35:04.978704] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.921 [2024-05-15 16:35:04.978756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.921 [2024-05-15 16:35:04.978781] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.921 [2024-05-15 16:35:04.978795] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.921 [2024-05-15 16:35:04.978808] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.921 [2024-05-15 16:35:04.978924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.921 [2024-05-15 16:35:04.979016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.921 [2024-05-15 16:35:04.979018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 [2024-05-15 16:35:05.124158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.921 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 [2024-05-15 16:35:05.141234] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:58.179 [2024-05-15 16:35:05.156393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.179 NULL1 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1719904 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 
16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.179 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.436 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.436 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:13:58.436 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.436 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.436 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.693 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.693 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:13:58.693 16:35:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.693 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.693 16:35:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.258 16:35:06 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.259 16:35:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:13:59.259 16:35:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.259 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.259 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.516 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.516 16:35:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:13:59.516 16:35:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.516 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.516 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.773 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.773 16:35:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:13:59.773 16:35:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.773 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.773 16:35:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.030 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.030 16:35:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:00.030 16:35:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.030 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.030 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.288 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.288 16:35:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:00.288 16:35:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.288 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.288 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.852 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.852 16:35:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:00.852 16:35:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.852 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.852 16:35:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.111 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.111 16:35:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:01.111 16:35:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.111 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.111 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.368 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:14:01.368 16:35:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:01.368 16:35:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.368 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.368 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.625 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.625 16:35:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:01.625 16:35:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.625 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.625 16:35:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.883 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.883 16:35:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:01.883 16:35:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.883 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.883 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.447 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.447 16:35:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:02.447 16:35:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.447 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.447 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.704 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.704 16:35:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:02.704 16:35:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.704 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.704 16:35:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.962 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.962 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:02.962 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.962 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.962 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.219 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.219 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:03.219 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.219 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.219 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.476 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.476 16:35:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:03.476 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.476 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.476 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.040 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.040 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:04.040 16:35:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.040 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.040 16:35:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.297 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.297 16:35:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:04.297 16:35:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.297 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.297 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.555 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.555 16:35:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:04.555 16:35:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.555 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.555 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.812 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.812 16:35:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:04.812 16:35:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.812 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.812 16:35:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.070 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.070 16:35:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:05.070 16:35:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.070 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.070 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.635 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.635 16:35:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:05.635 16:35:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.635 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.635 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.923 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.923 16:35:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 1719904 00:14:05.923 16:35:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.923 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.923 16:35:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.181 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.181 16:35:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:06.181 16:35:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.181 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.181 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.437 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.437 16:35:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:06.437 16:35:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.437 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.437 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.695 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.695 16:35:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:06.695 16:35:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.695 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.695 16:35:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.260 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.260 16:35:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:07.260 16:35:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.260 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.260 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.517 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.517 16:35:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:07.517 16:35:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.517 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.517 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.775 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.775 16:35:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:07.775 16:35:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.775 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.775 16:35:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.032 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.032 16:35:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:08.032 16:35:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.032 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.032 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.032 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1719904 00:14:08.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1719904) - No such process 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1719904 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.289 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.289 rmmod nvme_tcp 00:14:08.546 rmmod nvme_fabrics 00:14:08.546 rmmod nvme_keyring 00:14:08.546 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.546 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:08.546 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:08.546 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1719762 ']' 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1719762 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1719762 ']' 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1719762 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1719762 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1719762' 00:14:08.547 killing process with pid 1719762 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 1719762 00:14:08.547 [2024-05-15 16:35:15.599616] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:14:08.547 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1719762 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.805 16:35:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.703 16:35:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.703 00:14:10.703 real 0m15.771s 00:14:10.703 user 0m38.495s 00:14:10.703 sys 0m6.222s 00:14:10.703 16:35:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.703 16:35:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.703 ************************************ 00:14:10.703 END TEST nvmf_connect_stress 00:14:10.703 ************************************ 00:14:10.703 16:35:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:10.703 16:35:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:10.703 16:35:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.703 16:35:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.961 ************************************ 00:14:10.961 START TEST nvmf_fused_ordering 00:14:10.961 ************************************ 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:10.961 * Looking for test storage... 
00:14:10.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.961 16:35:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.961 16:35:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.490 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:13.491 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:13.491 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:13.491 Found net devices under 0000:09:00.0: cvl_0_0 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.491 16:35:20 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:13.491 Found net devices under 0000:09:00.1: cvl_0_1 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:13.491 00:14:13.491 --- 10.0.0.2 ping statistics --- 00:14:13.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.491 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:13.491 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:13.491 00:14:13.491 --- 10.0.0.1 ping statistics --- 00:14:13.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.491 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1723340 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1723340 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1723340 ']' 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:13.492 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.492 [2024-05-15 16:35:20.668098] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
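[Editor's note: earlier in this test's nvmftestinit/nvmf_tcp_init phase, the harness builds its loopback topology from the two E810 ports discovered above: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side); the two pings just above confirm reachability in both directions. A minimal standalone sketch of the same setup, assuming the interface names seen in this log:

# Start clean, then isolate the target-side port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address both ends: initiator in the root namespace, target inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP (port 4420) arriving on the initiator-side port, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1]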
00:14:13.492 [2024-05-15 16:35:20.668172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.492 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.750 [2024-05-15 16:35:20.750862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.750 [2024-05-15 16:35:20.841305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.750 [2024-05-15 16:35:20.841368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.750 [2024-05-15 16:35:20.841395] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.750 [2024-05-15 16:35:20.841408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.750 [2024-05-15 16:35:20.841420] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.750 [2024-05-15 16:35:20.841462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.750 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:13.750 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:13.750 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.750 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.750 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 [2024-05-15 16:35:20.992311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.009 16:35:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 [2024-05-15 16:35:21.008303] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:14.009 [2024-05-15 16:35:21.008606] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 NULL1 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.009 16:35:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:14.009 [2024-05-15 16:35:21.054667] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
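[Editor's note: the rpc_cmd calls above provision the target end to end: a TCP transport with the options shown (-o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 (allow-any-host, serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks attached as namespace 1 (hence "Namespace ID: 1 size: 1GB" below). In the usual SPDK layout, rpc_cmd wraps scripts/rpc.py talking to /var/tmp/spdk.sock, so an equivalent standalone sequence is roughly the sketch below, assuming nvmf_tgt is already running inside the namespace as started above:

# Configure NVMe-oF/TCP on a running nvmf_tgt via its default RPC socket.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB backing device, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1]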
00:14:14.009 [2024-05-15 16:35:21.054710] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723484 ] 00:14:14.009 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.267 Attached to nqn.2016-06.io.spdk:cnode1 00:14:14.267 Namespace ID: 1 size: 1GB 00:14:14.267 fused_ordering(0) 00:14:14.267 fused_ordering(1) 00:14:14.267 fused_ordering(2) 00:14:14.267 fused_ordering(3) 00:14:14.267 fused_ordering(4) 00:14:14.267 fused_ordering(5) 00:14:14.267 fused_ordering(6) 00:14:14.267 fused_ordering(7) 00:14:14.267 fused_ordering(8) 00:14:14.267 fused_ordering(9) 00:14:14.267 fused_ordering(10) 00:14:14.267 fused_ordering(11) 00:14:14.267 fused_ordering(12) 00:14:14.267 fused_ordering(13) 00:14:14.267 fused_ordering(14) 00:14:14.267 fused_ordering(15) 00:14:14.267 fused_ordering(16) 00:14:14.267 fused_ordering(17) 00:14:14.267 fused_ordering(18) 00:14:14.267 fused_ordering(19) 00:14:14.267 fused_ordering(20) 00:14:14.267 fused_ordering(21) 00:14:14.267 fused_ordering(22) 00:14:14.267 fused_ordering(23) 00:14:14.267 fused_ordering(24) 00:14:14.267 fused_ordering(25) 00:14:14.267 fused_ordering(26) 00:14:14.267 fused_ordering(27) 00:14:14.267 fused_ordering(28) 00:14:14.267 fused_ordering(29) 00:14:14.267 fused_ordering(30) 00:14:14.267 fused_ordering(31) 00:14:14.267 fused_ordering(32) 00:14:14.267 fused_ordering(33) 00:14:14.267 fused_ordering(34) 00:14:14.267 fused_ordering(35) 00:14:14.267 fused_ordering(36) 00:14:14.267 fused_ordering(37) 00:14:14.267 fused_ordering(38) 00:14:14.267 fused_ordering(39) 00:14:14.267 fused_ordering(40) 00:14:14.267 fused_ordering(41) 00:14:14.267 fused_ordering(42) 00:14:14.267 fused_ordering(43) 00:14:14.267 fused_ordering(44) 00:14:14.267 fused_ordering(45) 00:14:14.267 fused_ordering(46) 00:14:14.267 fused_ordering(47) 00:14:14.267 fused_ordering(48) 00:14:14.267 fused_ordering(49) 00:14:14.267 fused_ordering(50) 00:14:14.267 fused_ordering(51) 00:14:14.267 fused_ordering(52) 00:14:14.267 fused_ordering(53) 00:14:14.267 fused_ordering(54) 00:14:14.267 fused_ordering(55) 00:14:14.267 fused_ordering(56) 00:14:14.267 fused_ordering(57) 00:14:14.267 fused_ordering(58) 00:14:14.267 fused_ordering(59) 00:14:14.267 fused_ordering(60) 00:14:14.267 fused_ordering(61) 00:14:14.267 fused_ordering(62) 00:14:14.267 fused_ordering(63) 00:14:14.267 fused_ordering(64) 00:14:14.267 fused_ordering(65) 00:14:14.267 fused_ordering(66) 00:14:14.267 fused_ordering(67) 00:14:14.267 fused_ordering(68) 00:14:14.267 fused_ordering(69) 00:14:14.267 fused_ordering(70) 00:14:14.267 fused_ordering(71) 00:14:14.267 fused_ordering(72) 00:14:14.267 fused_ordering(73) 00:14:14.267 fused_ordering(74) 00:14:14.267 fused_ordering(75) 00:14:14.267 fused_ordering(76) 00:14:14.267 fused_ordering(77) 00:14:14.267 fused_ordering(78) 00:14:14.267 fused_ordering(79) 00:14:14.267 fused_ordering(80) 00:14:14.267 fused_ordering(81) 00:14:14.267 fused_ordering(82) 00:14:14.267 fused_ordering(83) 00:14:14.267 fused_ordering(84) 00:14:14.267 fused_ordering(85) 00:14:14.267 fused_ordering(86) 00:14:14.267 fused_ordering(87) 00:14:14.267 fused_ordering(88) 00:14:14.267 fused_ordering(89) 00:14:14.267 fused_ordering(90) 00:14:14.267 fused_ordering(91) 00:14:14.267 fused_ordering(92) 00:14:14.267 fused_ordering(93) 00:14:14.267 fused_ordering(94) 00:14:14.267 fused_ordering(95) 00:14:14.267 fused_ordering(96) 00:14:14.267 
fused_ordering(97) ... fused_ordering(956) [condensed: 860 consecutive fused_ordering(N) markers elided; the counter climbs strictly by one from 97 through 956 with no gaps or reordering, while the log timestamp advances from 00:14:14.267 to 00:14:14.833 near marker 205, 00:14:15.399 near marker 410, 00:14:15.965 near marker 615, and 00:14:16.897 near marker 820]
fused_ordering(957) 00:14:16.897 fused_ordering(958) 00:14:16.897 fused_ordering(959) 00:14:16.897 fused_ordering(960) 00:14:16.897 fused_ordering(961) 00:14:16.897 fused_ordering(962) 00:14:16.897 fused_ordering(963) 00:14:16.897 fused_ordering(964) 00:14:16.897 fused_ordering(965) 00:14:16.897 fused_ordering(966) 00:14:16.897 fused_ordering(967) 00:14:16.897 fused_ordering(968) 00:14:16.897 fused_ordering(969) 00:14:16.897 fused_ordering(970) 00:14:16.897 fused_ordering(971) 00:14:16.897 fused_ordering(972) 00:14:16.897 fused_ordering(973) 00:14:16.897 fused_ordering(974) 00:14:16.897 fused_ordering(975) 00:14:16.897 fused_ordering(976) 00:14:16.897 fused_ordering(977) 00:14:16.897 fused_ordering(978) 00:14:16.897 fused_ordering(979) 00:14:16.897 fused_ordering(980) 00:14:16.897 fused_ordering(981) 00:14:16.897 fused_ordering(982) 00:14:16.897 fused_ordering(983) 00:14:16.897 fused_ordering(984) 00:14:16.897 fused_ordering(985) 00:14:16.897 fused_ordering(986) 00:14:16.897 fused_ordering(987) 00:14:16.897 fused_ordering(988) 00:14:16.897 fused_ordering(989) 00:14:16.897 fused_ordering(990) 00:14:16.897 fused_ordering(991) 00:14:16.897 fused_ordering(992) 00:14:16.897 fused_ordering(993) 00:14:16.897 fused_ordering(994) 00:14:16.897 fused_ordering(995) 00:14:16.897 fused_ordering(996) 00:14:16.897 fused_ordering(997) 00:14:16.897 fused_ordering(998) 00:14:16.897 fused_ordering(999) 00:14:16.897 fused_ordering(1000) 00:14:16.897 fused_ordering(1001) 00:14:16.897 fused_ordering(1002) 00:14:16.897 fused_ordering(1003) 00:14:16.897 fused_ordering(1004) 00:14:16.897 fused_ordering(1005) 00:14:16.897 fused_ordering(1006) 00:14:16.897 fused_ordering(1007) 00:14:16.897 fused_ordering(1008) 00:14:16.897 fused_ordering(1009) 00:14:16.897 fused_ordering(1010) 00:14:16.897 fused_ordering(1011) 00:14:16.897 fused_ordering(1012) 00:14:16.897 fused_ordering(1013) 00:14:16.897 fused_ordering(1014) 00:14:16.897 fused_ordering(1015) 00:14:16.897 fused_ordering(1016) 00:14:16.897 fused_ordering(1017) 00:14:16.897 fused_ordering(1018) 00:14:16.897 fused_ordering(1019) 00:14:16.897 fused_ordering(1020) 00:14:16.897 fused_ordering(1021) 00:14:16.897 fused_ordering(1022) 00:14:16.897 fused_ordering(1023) 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.897 rmmod nvme_tcp 00:14:16.897 rmmod nvme_fabrics 00:14:16.897 rmmod nvme_keyring 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1723340 ']' 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1723340 
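The nvmfcleanup trace above (nvmf/common.sh@117-125) shows how the kernel initiator side is torn down: sync, drop errexit, unload nvme-tcp inside a bounded retry loop (module removal can fail while connections are still draining), then unload nvme-fabrics and restore errexit. A minimal sketch of that pattern, assuming the loop simply retries until modprobe succeeds; any pause between attempts is not visible in the trace:

    nvmfcleanup() {
        sync
        set +e                                  # unload may legitimately fail while refs remain
        for i in {1..20}; do
            # -v makes modprobe print the rmmod calls seen in the log
            # (nvme_tcp, then its now-unused deps nvme_fabrics and nvme_keyring)
            modprobe -v -r nvme-tcp && break
            sleep 1                             # assumed back-off; not shown in the trace
        done
        modprobe -v -r nvme-fabrics
        set -e
    }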
00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1723340 ']' 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1723340 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1723340 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1723340' 00:14:16.897 killing process with pid 1723340 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1723340 00:14:16.897 [2024-05-15 16:35:23.941103] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:16.897 16:35:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1723340 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.156 16:35:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.057 16:35:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.057 00:14:19.057 real 0m8.272s 00:14:19.057 user 0m5.510s 00:14:19.057 sys 0m3.883s 00:14:19.057 16:35:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:19.057 16:35:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:19.057 ************************************ 00:14:19.057 END TEST nvmf_fused_ordering 00:14:19.057 ************************************ 00:14:19.057 16:35:26 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:19.057 16:35:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:19.057 16:35:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.057 16:35:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.057 ************************************ 00:14:19.057 START TEST nvmf_delete_subsystem 00:14:19.057 ************************************ 00:14:19.057 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:19.315 
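Before the next test's setup begins, note the killprocess helper that just ran in the teardown (common/autotest_common.sh@946-970): it never signals blindly. It first probes the pid with kill -0, resolves the command name with ps and refuses to signal a sudo wrapper, and only then kills and waits so the child is reaped. A condensed sketch of the visible behavior, with the helper's extra branches reduced to plain returns:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # "reactor_1" in the trace above
        [ "$name" = sudo ] && return 1            # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap; the real helper tolerates non-children
    }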
* Looking for test storage... 00:14:19.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=[paths/export.sh@3 re-exports the same directory set as @2 in rotated order; long duplicate value elided] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=[duplicate value elided] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [exported PATH value elided; identical to the value shown at paths/export.sh@2] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.315 16:35:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.844 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.844 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.844 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:21.845 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:21.845 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:21.845 Found net devices under 0000:09:00.0: cvl_0_0 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:21.845 Found net devices under 0000:09:00.1: cvl_0_1 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:14:21.845 00:14:21.845 --- 10.0.0.2 ping statistics --- 00:14:21.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.845 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:14:21.845 00:14:21.845 --- 10.0.0.1 ping statistics --- 00:14:21.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.845 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1726099 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1726099 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1726099 ']' 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.845 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
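The device discovery earlier in this trace reduces to a sysfs walk: every PCI function whose vendor/device pair is on the supported table (here Intel 0x8086:0x159b, the two E810 ports at 0000:09:00.0/1 bound to the ice driver) contributes the netdev names found under its sysfs node. A rough sketch of that walk, assuming standard sysfs layout rather than the exact array bookkeeping in nvmf/common.sh:

    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        [ "$(cat "$pci/device")" = 0x159b ] || continue
        for net in "$pci"/net/*; do               # assumes each port exposes a netdev
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

nvmf_tcp_init then turns the two ports into a point-to-point rig: cvl_0_0 becomes the target side inside namespace cvl_0_0_ns_spdk at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, one iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves the path before the target app starts. Condensed from the trace above:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator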
00:14:21.846 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.846 16:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.846 [2024-05-15 16:35:28.907358] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:14:21.846 [2024-05-15 16:35:28.907447] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.846 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.846 [2024-05-15 16:35:28.986406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:22.104 [2024-05-15 16:35:29.072951] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.104 [2024-05-15 16:35:29.073005] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.104 [2024-05-15 16:35:29.073026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.104 [2024-05-15 16:35:29.073040] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.104 [2024-05-15 16:35:29.073051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.104 [2024-05-15 16:35:29.073142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.104 [2024-05-15 16:35:29.073147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 [2024-05-15 16:35:29.217022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.104 16:35:29 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 [2024-05-15 16:35:29.233143] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:22.104 [2024-05-15 16:35:29.233448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 NULL1 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 Delay0 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1726124 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:22.104 16:35:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:22.104 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.104 [2024-05-15 16:35:29.307961] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
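Stripped of the xtrace noise, the provisioning above is five RPCs followed by the load generator. The rpc_cmd helper is assumed here to forward to SPDK's scripts/rpc.py against the nvmf_tgt started earlier; the arguments are copied from the trace. The delay bdev is the crux of the setup: with 1,000,000 us (one second) of injected latency on every read and write, queue depth 128 guarantees that plenty of I/O is still in flight when the subsystem is deleted.

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # flags exactly as traced; -u 8192 is the I/O unit size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512              # 1000 MiB backing bdev, 512 B blocks
    # avg/p99 read (-r/-t) and avg/p99 write (-w/-n) latency in microseconds,
    # per SPDK's delay bdev; values copied from the trace
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf then connects from cores 2 and 3 (-c 0xC, matching the "NSID 1 with lcore 2/3" association lines later) at queue depth 128 (-q), 512-byte I/O (-o), a 70% read random mix (-w randrw -M 70), for five seconds (-t 5).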
00:14:24.628 16:35:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.628 16:35:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.628 16:35:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:24.628 Write completed with error (sct=0, sc=8) 00:14:24.628 starting I/O failed: -6 [several hundred repetitive per-command completion lines collapsed: once nqn.2016-06.io.spdk:cnode1 is deleted under load, every queued Read/Write completes with error (sct=0, sc=8) and new submissions report "starting I/O failed: -6", consistent with commands being aborted when the subsystem goes away; the distinct driver errors interleaved in that run are kept below]
00:14:24.628 [2024-05-15 16:35:31.437950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239180 is same with the state(5) to be set
00:14:24.628 [2024-05-15 16:35:31.438831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f05c8000c00 is same with the state(5) to be set
00:14:25.226 [2024-05-15 16:35:32.407734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c8b0 is same with the state(5) to be set
00:14:25.484 [2024-05-15 16:35:32.439492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239360 is same with the state(5) to be set
00:14:25.484 [2024-05-15 16:35:32.439706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f05c800bfe0 is same with the state(5) to be set
00:14:25.485 [2024-05-15 16:35:32.439867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f05c800c600 is same with the state(5) to be set
00:14:25.485 [2024-05-15 16:35:32.440556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239980 is same with the state(5) to be set
00:14:25.485 Initializing NVMe Controllers 00:14:25.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:25.485 Controller IO queue size 128, less than required. 00:14:25.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:25.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:25.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:25.485 Initialization complete. Launching workers. 00:14:25.485 ======================================================== 00:14:25.485 Latency(us) 00:14:25.485 Device Information : IOPS MiB/s Average min max 00:14:25.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.36 0.08 938246.99 774.77 2001955.54 00:14:25.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.84 0.08 923793.78 388.93 2001964.66 00:14:25.485 ======================================================== 00:14:25.485 Total : 326.20 0.16 930987.39 388.93 2001964.66 00:14:25.485 00:14:25.485 [2024-05-15 16:35:32.441555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223c8b0 (9): Bad file descriptor 00:14:25.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:25.485 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.485 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:25.485 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1726124 00:14:25.485 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1726124 00:14:25.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1726124) - No such process 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1726124 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1726124 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1726124 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem 
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:25.743 [2024-05-15 16:35:32.959071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:25.743 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:26.001 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.001 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1726535
00:14:26.001 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:14:26.001 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:14:26.001 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:26.001 16:35:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:26.001 EAL: No free 2048 kB hugepages reported on node 1
00:14:26.001 [2024-05-15 16:35:33.010097] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
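Stripped of the xtrace noise, the sequence above and the poll loop that follows it reduce to a short shell pattern. The sketch below is a minimal standalone rendering, not the test script itself: the NQN, serial, and perf flags are copied from this run, the Delay0 bdev and a target already listening on 10.0.0.2:4420 are assumed to exist, and a plain `!` stands in for the harness's NOT helper.

    #!/usr/bin/env bash
    # Delete an NVMe-oF subsystem while spdk_nvme_perf drives I/O against it,
    # then verify that perf exits with an error, which is the pass condition.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

    # Three seconds of queued-up random I/O in the background.
    "$spdk/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Pull the subsystem out from under the running workload.
    "$rpc" nvmf_delete_subsystem "$nqn"

    # Poll until perf notices and exits; bash reaps the background job as
    # soon as it dies, so kill -0 starts failing once the process is gone.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1
        sleep 0.5
    done
    ! wait "$perf_pid"   # nonzero status ("errors occurred") is expected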
00:14:26.258 16:35:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:26.258 16:35:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:26.258 16:35:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:26.822 16:35:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:26.822 16:35:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:26.822 16:35:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:27.386 16:35:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:27.386 16:35:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:27.386 16:35:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:27.952 16:35:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:27.952 16:35:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:27.952 16:35:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:28.517 16:35:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:28.517 16:35:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:28.517 16:35:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:28.774 16:35:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:28.774 16:35:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:28.774 16:35:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:29.032 Initializing NVMe Controllers
00:14:29.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:29.032 Controller IO queue size 128, less than required.
00:14:29.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:29.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:29.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:29.032 Initialization complete. Launching workers.
00:14:29.032 ========================================================
00:14:29.032                                                                           Latency(us)
00:14:29.032 Device Information                                                     :    IOPS   MiB/s    Average        min        max
00:14:29.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1003593.11 1000203.63 1041320.05
00:14:29.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1005062.73 1000229.82 1013099.59
00:14:29.032 ========================================================
00:14:29.032 Total                                                                  :  256.00    0.12 1004327.92 1000203.63 1041320.05
00:14:29.032
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1726535
00:14:29.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1726535) - No such process
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1726535
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:29.289 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:29.289 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1726099 ']'
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1726099
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1726099 ']'
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1726099
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1726099
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1726099'
00:14:29.547 killing process with pid 1726099
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1726099
00:14:29.547 [2024-05-15 16:35:36.575778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:14:29.547 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 1726099
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:29.805 16:35:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:31.706 16:35:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:31.706
00:14:31.706 real    0m12.586s
00:14:31.706 user    0m27.761s
00:14:31.706 sys     0m3.207s
00:14:31.706 16:35:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:31.706 16:35:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:31.706 ************************************
00:14:31.706 END TEST nvmf_delete_subsystem
00:14:31.706 ************************************
00:14:31.706 16:35:38 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:31.706 16:35:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:31.706 16:35:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:31.706 16:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:31.706 ************************************
00:14:31.706 START TEST nvmf_ns_masking
00:14:31.706 ************************************
00:14:31.706 16:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:31.964 * Looking for test storage...
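The nvmf_ns_masking test starting here exercises SPDK's per-host namespace visibility. Condensed from the RPC and nvme-cli calls that appear further down in this log (NQNs, addresses, and malloc sizes copied from the run; extra connect flags such as -I and -i 4 omitted for brevity), the core sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2016-06.io.spdk:host1

    # Target side: a namespace added with --no-auto-visible stays hidden
    # from every host until that host is allowed in explicitly.
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem $subnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener $subnqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns $subnqn Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host $subnqn 1 $hostnqn      # NSID 1 becomes visible to host1
    $rpc nvmf_ns_remove_host $subnqn 1 $hostnqn   # and is hidden again

    # Initiator side: check what this host can actually see.
    nvme connect -t tcp -n $subnqn -q $hostnqn -a 10.0.0.2 -s 4420
    nvme list-ns /dev/nvme0                              # lists only visible NSIDs
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid  # all zeroes while masked

The "[ 0]:0x1"-style lines and nguid comparisons in the trace below are exactly this list-ns/id-ns pair, wrapped in the test's ns_is_visible helper.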
00:14:31.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.964 16:35:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=6b79bb17-d8d9-4ce1-b1fd-c985a1cbd06d 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.965 16:35:38 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.965 16:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:34.493 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:34.493 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.493 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:34.494 Found net devices under 0000:09:00.0: cvl_0_0 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:34.494 Found net devices under 0000:09:00.1: cvl_0_1 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:34.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:14:34.494 00:14:34.494 --- 10.0.0.2 ping statistics --- 00:14:34.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.494 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:14:34.494 00:14:34.494 --- 10.0.0.1 ping statistics --- 00:14:34.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.494 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1729291 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1729291 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1729291 ']' 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:34.494 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:34.494 [2024-05-15 16:35:41.534159] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
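For orientation, the network plumbing nvmftestinit performed above comes down to a handful of iproute2 commands, shown here with the interface names, addresses, and nvmf_tgt invocation taken from this log: the NIC's first port is moved into a private network namespace and serves as the target side, while the second port stays in the root namespace as the initiator.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, then start the target inside the netns.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &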
00:14:34.494 [2024-05-15 16:35:41.534262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.494 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.494 [2024-05-15 16:35:41.614001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.494 [2024-05-15 16:35:41.705675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.494 [2024-05-15 16:35:41.705732] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.494 [2024-05-15 16:35:41.705749] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.494 [2024-05-15 16:35:41.705762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.494 [2024-05-15 16:35:41.705774] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.494 [2024-05-15 16:35:41.705832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.494 [2024-05-15 16:35:41.705884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.494 [2024-05-15 16:35:41.706000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.494 [2024-05-15 16:35:41.706003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.752 16:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:35.009 [2024-05-15 16:35:42.078611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.009 16:35:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:35.009 16:35:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:35.009 16:35:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:35.266 Malloc1 00:14:35.266 16:35:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:35.525 Malloc2 00:14:35.525 16:35:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:35.783 16:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:36.041 16:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.298 [2024-05-15 16:35:43.483490] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:36.298 [2024-05-15 16:35:43.483844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.298 16:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:36.298 16:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6b79bb17-d8d9-4ce1-b1fd-c985a1cbd06d -a 10.0.0.2 -s 4420 -i 4 00:14:36.555 16:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.555 16:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:36.555 16:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.555 16:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:36.555 16:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:39.081 [ 0]:0x1 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=700a350030ca403d9de48257cf95b081 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 700a350030ca403d9de48257cf95b081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.081 16:35:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:39.081 [ 0]:0x1 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=700a350030ca403d9de48257cf95b081 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 700a350030ca403d9de48257cf95b081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:39.081 [ 1]:0x2 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.081 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.338 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:39.594 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:39.594 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6b79bb17-d8d9-4ce1-b1fd-c985a1cbd06d -a 10.0.0.2 -s 4420 -i 4 00:14:39.850 16:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:39.850 16:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:39.850 16:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.850 16:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:39.850 16:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:39.851 16:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:41.746 [ 0]:0x2 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.746 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.004 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:42.004 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.004 16:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:42.261 [ 0]:0x1 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=700a350030ca403d9de48257cf95b081 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 700a350030ca403d9de48257cf95b081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:42.261 [ 1]:0x2 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.261 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:42.519 16:35:49 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:42.519 [ 0]:0x2 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.519 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:42.776 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:42.777 16:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6b79bb17-d8d9-4ce1-b1fd-c985a1cbd06d -a 10.0.0.2 -s 4420 -i 4 00:14:43.034 16:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:43.034 16:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:43.034 16:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.034 16:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:43.034 16:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:43.034 16:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:44.983 [ 0]:0x1 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=700a350030ca403d9de48257cf95b081 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 700a350030ca403d9de48257cf95b081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:44.983 [ 1]:0x2 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.983 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.241 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:45.499 [ 0]:0x2 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:45.499 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:45.767 [2024-05-15 16:35:52.805635] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:45.767 
request: 00:14:45.767 { 00:14:45.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.767 "nsid": 2, 00:14:45.767 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.767 "method": "nvmf_ns_remove_host", 00:14:45.767 "req_id": 1 00:14:45.767 } 00:14:45.767 Got JSON-RPC error response 00:14:45.767 response: 00:14:45.767 { 00:14:45.767 "code": -32602, 00:14:45.767 "message": "Invalid parameters" 00:14:45.767 } 00:14:45.767 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:45.767 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:45.768 [ 0]:0x2 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=91755f53f7f04608a7fe24046c46e7ac 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 91755f53f7f04608a7fe24046c46e7ac != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:45.768 16:35:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.026 16:35:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.284 rmmod nvme_tcp 00:14:46.284 rmmod nvme_fabrics 00:14:46.284 rmmod nvme_keyring 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1729291 ']' 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1729291 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1729291 ']' 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1729291 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1729291 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1729291' 00:14:46.284 killing process with pid 1729291 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1729291 00:14:46.284 [2024-05-15 16:35:53.398412] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:46.284 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 1729291 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.542 16:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.076 16:35:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.076 00:14:49.076 real 0m16.825s 00:14:49.076 user 0m51.246s 00:14:49.076 sys 0m4.023s 00:14:49.076 16:35:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.076 16:35:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.076 ************************************ 00:14:49.076 END TEST nvmf_ns_masking 00:14:49.076 ************************************ 00:14:49.076 16:35:55 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:49.076 16:35:55 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:49.076 16:35:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.076 16:35:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.076 16:35:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.076 ************************************ 00:14:49.076 START TEST nvmf_nvme_cli 00:14:49.076 ************************************ 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:49.076 * Looking for test storage... 
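For reference, the visibility probe that ns_masking.sh drove in the test above reduces to roughly the sketch below; the device node, NQNs and rpc.py path are taken from the trace, while the function body is a condensed sketch rather than the script verbatim. The NOT wrapper seen in the trace simply inverts the expected exit status, so a masked namespace is asserted by expecting this probe to fail.

#!/usr/bin/env bash
# Sketch of the ns_masking.sh visibility probe (lines @39-@41 in the trace).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

ns_is_visible() {    # $1 = nsid as listed, e.g. 0x1
    # The namespace should appear in the active namespace list...
    nvme list-ns /dev/nvme0 | grep -q "$1" || return 1
    # ...and identify with a non-zero NGUID; a namespace masked from
    # this host reports an all-zero NGUID, as seen in the log above.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# Mask namespace 1 for host1, then expect the probe to fail:
$RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
if ns_is_visible 0x1; then echo "unexpectedly visible"; else echo "masked"; fi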
00:14:49.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.076 16:35:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.599 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:51.600 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:51.600 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:51.600 Found net devices under 0000:09:00.0: cvl_0_0 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:51.600 Found net devices under 0000:09:00.1: cvl_0_1 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:14:51.600 00:14:51.600 --- 10.0.0.2 ping statistics --- 00:14:51.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.600 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:51.600 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:14:51.600 00:14:51.600 --- 10.0.0.1 ping statistics --- 00:14:51.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.601 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1733128 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1733128 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1733128 ']' 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 [2024-05-15 16:35:58.417510] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:14:51.601 [2024-05-15 16:35:58.417610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.601 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.601 [2024-05-15 16:35:58.492756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.601 [2024-05-15 16:35:58.578793] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.601 [2024-05-15 16:35:58.578854] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
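The interface plumbing whose two pings succeeded just above follows nvmf_tcp_init in nvmf/common.sh; condensed to its effective commands (interface names and addresses exactly as traced, one physical port per side):

# Condensed from the nvmf_tcp_init trace above; cvl_0_0 becomes the
# target-side port inside a network namespace, cvl_0_1 the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator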
00:14:51.601 [2024-05-15 16:35:58.578881] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.601 [2024-05-15 16:35:58.578893] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.601 [2024-05-15 16:35:58.578902] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.601 [2024-05-15 16:35:58.579060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.601 [2024-05-15 16:35:58.579126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.601 [2024-05-15 16:35:58.579176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.601 [2024-05-15 16:35:58.579179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 [2024-05-15 16:35:58.733966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 Malloc0 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 Malloc1 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.601 [2024-05-15 16:35:58.819529] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:51.601 [2024-05-15 16:35:58.819864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.601 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:51.859 00:14:51.859 Discovery Log Number of Records 2, Generation counter 2 00:14:51.859 =====Discovery Log Entry 0====== 00:14:51.859 trtype: tcp 00:14:51.859 adrfam: ipv4 00:14:51.859 subtype: current discovery subsystem 00:14:51.859 treq: not required 00:14:51.859 portid: 0 00:14:51.859 trsvcid: 4420 00:14:51.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:51.859 traddr: 10.0.0.2 00:14:51.859 eflags: explicit discovery connections, duplicate discovery information 00:14:51.859 sectype: none 00:14:51.859 =====Discovery Log Entry 1====== 00:14:51.859 trtype: tcp 00:14:51.859 adrfam: ipv4 00:14:51.859 subtype: nvme subsystem 00:14:51.859 treq: not required 00:14:51.859 portid: 0 00:14:51.859 trsvcid: 4420 00:14:51.859 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:51.859 traddr: 10.0.0.2 00:14:51.859 eflags: none 00:14:51.859 sectype: none 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
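Stripped of the xtrace noise, the target provisioning that produced the discovery log above is the RPC sequence below ($RPC abbreviates the full rpc.py path from the trace); the nvme connect that follows a few lines down in the log uses the same address and the generated host NQN/ID.

# Provisioning sequence for the nvme_cli test, as traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: enumerate, then connect (host NQN/ID as generated above).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=$HOSTNQN --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=$HOSTNQN --hostid=29f67375-a902-e411-ace9-001e67bc3c9a

With both namespaces attached, waitforserial then polls lsblk -l -o NAME,SERIAL until two devices with serial SPDKISFASTANDAWESOME appear, which is the loop visible in the trace that follows.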
00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:51.859 16:35:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:52.424 16:35:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:52.424 16:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:52.424 16:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.424 16:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:52.424 16:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:52.424 16:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.319 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.576 16:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:54.576 /dev/nvme0n1 ]] 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:54.577 16:36:01 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:54.577 16:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:54.834 16:36:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.835 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.835 rmmod nvme_tcp 00:14:54.835 rmmod nvme_fabrics 00:14:55.092 rmmod nvme_keyring 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1733128 ']' 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1733128 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 1733128 ']' 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1733128 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1733128 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1733128' 00:14:55.092 killing process with pid 1733128 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1733128 00:14:55.092 [2024-05-15 16:36:02.098251] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:55.092 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1733128 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.350 16:36:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.248 16:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.248 00:14:57.248 real 0m8.643s 00:14:57.248 user 0m15.597s 00:14:57.248 sys 0m2.443s 00:14:57.248 16:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:57.248 16:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.248 ************************************ 00:14:57.248 END TEST nvmf_nvme_cli 00:14:57.248 ************************************ 00:14:57.248 16:36:04 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:57.248 16:36:04 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:57.248 16:36:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:57.248 16:36:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:57.248 16:36:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 ************************************ 00:14:57.507 START 
TEST nvmf_vfio_user 00:14:57.507 ************************************ 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:57.507 * Looking for test storage... 00:14:57.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1734044 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1734044' 00:14:57.507 Process pid: 1734044 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1734044 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1734044 ']' 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:57.507 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 [2024-05-15 16:36:04.601345] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:14:57.507 [2024-05-15 16:36:04.601439] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.507 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.507 [2024-05-15 16:36:04.670640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.765 [2024-05-15 16:36:04.756404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.765 [2024-05-15 16:36:04.756454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.765 [2024-05-15 16:36:04.756486] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.765 [2024-05-15 16:36:04.756498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.765 [2024-05-15 16:36:04.756509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
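waitforlisten above blocks until the freshly launched nvmf_tgt answers on its default RPC socket; a minimal equivalent of that launch-and-wait pattern is sketched below (the spdk_get_version probe and the 100-iteration cap are illustrative choices, not read from the trace):

APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$APP -i 0 -e 0xFFFF -m '[0,1,2,3]' &     # flags as traced above
nvmfpid=$!
echo "Process pid: $nvmfpid"

# Poll the default socket (/var/tmp/spdk.sock) until the app responds,
# bailing out if the target process dies during startup.
for _ in $(seq 1 100); do
    $RPC spdk_get_version >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done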
00:14:57.765 [2024-05-15 16:36:04.756636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.765 [2024-05-15 16:36:04.756702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.765 [2024-05-15 16:36:04.756750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.765 [2024-05-15 16:36:04.756752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.765 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.765 16:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:57.765 16:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.709 16:36:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:58.967 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.967 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.967 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.967 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.967 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.224 Malloc1 00:14:59.224 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:59.788 16:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.788 16:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:00.045 [2024-05-15 16:36:07.254332] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:00.302 16:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.302 16:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:00.302 16:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:00.560 Malloc2 00:15:00.560 16:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:00.817 16:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:01.073 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
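The setup_nvmf_vfio_user body traced above condenses to one VFIOUSER transport plus, per device, a socket directory, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener. A sketch of the same rpc.py calls (here $rpc stands for the scripts/rpc.py copy of whatever SPDK tree is in use); the decode_rpc_listen_address deprecation notice in the trace is emitted by the first of these listener adds:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done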
00:15:01.333 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:01.333 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:01.333 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.333 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:01.333 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:01.333 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:01.333 [2024-05-15 16:36:08.326206] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:15:01.333 [2024-05-15 16:36:08.326347] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734857 ] 00:15:01.333 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.333 [2024-05-15 16:36:08.361641] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:01.333 [2024-05-15 16:36:08.369726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.333 [2024-05-15 16:36:08.369755] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f81c4a11000 00:15:01.333 [2024-05-15 16:36:08.370727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.371716] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.372722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.373725] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.374736] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.375742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.376745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.377750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.333 [2024-05-15 16:36:08.378761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.333 [2024-05-15 16:36:08.378783] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f81c37c7000 00:15:01.333 [2024-05-15 16:36:08.380167] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.333 [2024-05-15 16:36:08.401682] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:01.333 [2024-05-15 16:36:08.401717] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:01.333 [2024-05-15 16:36:08.403896] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:01.333 [2024-05-15 16:36:08.403958] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:01.333 [2024-05-15 16:36:08.404045] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:01.333 [2024-05-15 16:36:08.404072] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:01.333 [2024-05-15 16:36:08.404082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:01.333 [2024-05-15 16:36:08.404888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:01.333 [2024-05-15 16:36:08.404907] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:01.333 [2024-05-15 16:36:08.404919] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:01.333 [2024-05-15 16:36:08.405894] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:01.333 [2024-05-15 16:36:08.405913] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:01.333 [2024-05-15 16:36:08.405926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.333 [2024-05-15 16:36:08.406896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:01.333 [2024-05-15 16:36:08.406913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.333 [2024-05-15 16:36:08.407901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:01.333 [2024-05-15 16:36:08.407919] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:01.333 [2024-05-15 16:36:08.407928] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:01.334 [2024-05-15 16:36:08.407939] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.334 
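The register reads above are the standard NVMe bring-up handshake carried over the vfio-user socket: offset 0x0 is CAP (0x201e0100ff here), 0x8 is VS, 0x14 is CC, and 0x1c is CSTS, so the trace confirms CC.EN = 0 and CSTS.RDY = 0 before the controller is enabled. The VS value 0x10300 read back here decodes to NVMe 1.3.0, matching the "NVMe Specification Version (VS): 1.3" line in the identify dump further down; a quick shell check of the decode (field layout MJR 31:16, MNR 15:8, TER 7:0 per the NVMe spec):

    vs=0x10300   # value read back from register offset 0x8 (VS)
    printf 'NVMe %d.%d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff )) $(( vs & 0xff ))
    # prints: NVMe 1.3.0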
[2024-05-15 16:36:08.408048] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:01.334 [2024-05-15 16:36:08.408057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.334 [2024-05-15 16:36:08.408065] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:01.334 [2024-05-15 16:36:08.408910] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:01.334 [2024-05-15 16:36:08.409915] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:01.334 [2024-05-15 16:36:08.410918] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:01.334 [2024-05-15 16:36:08.411917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.334 [2024-05-15 16:36:08.412023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.334 [2024-05-15 16:36:08.412928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:01.334 [2024-05-15 16:36:08.412946] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.334 [2024-05-15 16:36:08.412955] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.412979] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:01.334 [2024-05-15 16:36:08.412993] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413019] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.334 [2024-05-15 16:36:08.413029] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.334 [2024-05-15 16:36:08.413048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.334 [2024-05-15 16:36:08.413108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:01.334 [2024-05-15 16:36:08.413122] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:01.334 [2024-05-15 16:36:08.413131] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:01.334 [2024-05-15 16:36:08.413138] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:01.334 [2024-05-15 16:36:08.413146] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:01.334 [2024-05-15 16:36:08.413158] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:01.334 [2024-05-15 16:36:08.413166] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:01.334 [2024-05-15 16:36:08.413174] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:01.334 [2024-05-15 16:36:08.413243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:01.334 [2024-05-15 16:36:08.413260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.334 [2024-05-15 16:36:08.413284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.334 [2024-05-15 16:36:08.413296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.334 [2024-05-15 16:36:08.413308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.334 [2024-05-15 16:36:08.413317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413333] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:01.334 [2024-05-15 16:36:08.413363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:01.334 [2024-05-15 16:36:08.413374] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:01.334 [2024-05-15 16:36:08.413383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413394] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413404] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.334 [2024-05-15 
16:36:08.413429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:01.334 [2024-05-15 16:36:08.413485] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413501] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413529] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:01.334 [2024-05-15 16:36:08.413538] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:01.334 [2024-05-15 16:36:08.413548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:01.334 [2024-05-15 16:36:08.413562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:01.334 [2024-05-15 16:36:08.413577] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:01.334 [2024-05-15 16:36:08.413602] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413616] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413627] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.334 [2024-05-15 16:36:08.413636] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.334 [2024-05-15 16:36:08.413645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.334 [2024-05-15 16:36:08.413667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:01.334 [2024-05-15 16:36:08.413687] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.334 [2024-05-15 16:36:08.413713] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.335 [2024-05-15 16:36:08.413721] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.335 [2024-05-15 16:36:08.413731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.413746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.413760] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.335 
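Everything from the BAR maps down to this admin-queue chatter is produced by the debug categories on the spdk_nvme_identify invocation (-L nvme -L nvme_vfio -L vfio_pci). The same pass can be pointed at the second controller simply by swapping the socket directory and NQN inside the -r transport ID string; a sketch using the paths created during setup:

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -g -L nvme -L nvme_vfio -L vfio_pci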
[2024-05-15 16:36:08.413772] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:01.335 [2024-05-15 16:36:08.413785] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:01.335 [2024-05-15 16:36:08.413795] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:01.335 [2024-05-15 16:36:08.413803] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:01.335 [2024-05-15 16:36:08.413813] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.335 [2024-05-15 16:36:08.413821] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:01.335 [2024-05-15 16:36:08.413830] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:01.335 [2024-05-15 16:36:08.413860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.413878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.413897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.413909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.413925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.413941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.413957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.413969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.413987] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:01.335 [2024-05-15 16:36:08.413996] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:01.335 [2024-05-15 16:36:08.414003] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:01.335 [2024-05-15 16:36:08.414009] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:01.335 [2024-05-15 16:36:08.414019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:01.335 [2024-05-15 16:36:08.414031] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:01.335 [2024-05-15 16:36:08.414039] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:01.335 [2024-05-15 16:36:08.414049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.414060] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:01.335 [2024-05-15 16:36:08.414067] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.335 [2024-05-15 16:36:08.414080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.414093] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:01.335 [2024-05-15 16:36:08.414102] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:01.335 [2024-05-15 16:36:08.414111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:01.335 [2024-05-15 16:36:08.414123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.414144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.414160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:01.335 [2024-05-15 16:36:08.414176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:01.335 ===================================================== 00:15:01.335 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.335 ===================================================== 00:15:01.335 Controller Capabilities/Features 00:15:01.335 ================================ 00:15:01.335 Vendor ID: 4e58 00:15:01.335 Subsystem Vendor ID: 4e58 00:15:01.335 Serial Number: SPDK1 00:15:01.335 Model Number: SPDK bdev Controller 00:15:01.335 Firmware Version: 24.05 00:15:01.335 Recommended Arb Burst: 6 00:15:01.335 IEEE OUI Identifier: 8d 6b 50 00:15:01.335 Multi-path I/O 00:15:01.335 May have multiple subsystem ports: Yes 00:15:01.335 May have multiple controllers: Yes 00:15:01.335 Associated with SR-IOV VF: No 00:15:01.335 Max Data Transfer Size: 131072 00:15:01.335 Max Number of Namespaces: 32 00:15:01.335 Max Number of I/O Queues: 127 00:15:01.335 NVMe Specification Version (VS): 1.3 00:15:01.335 NVMe Specification Version (Identify): 1.3 00:15:01.335 Maximum Queue Entries: 256 00:15:01.335 Contiguous Queues Required: Yes 00:15:01.335 Arbitration Mechanisms Supported 00:15:01.335 Weighted Round Robin: Not Supported 00:15:01.335 Vendor Specific: Not Supported 00:15:01.335 Reset Timeout: 15000 ms 00:15:01.335 Doorbell Stride: 4 bytes 00:15:01.335 NVM Subsystem Reset: Not Supported 00:15:01.335 Command Sets Supported 00:15:01.335 NVM Command Set: Supported 00:15:01.335 Boot Partition: Not Supported 00:15:01.335 Memory Page Size Minimum: 4096 bytes 00:15:01.335 Memory Page Size Maximum: 4096 bytes 00:15:01.335 Persistent Memory Region: Not Supported 00:15:01.335 Optional Asynchronous 
Events Supported 00:15:01.335 Namespace Attribute Notices: Supported 00:15:01.335 Firmware Activation Notices: Not Supported 00:15:01.335 ANA Change Notices: Not Supported 00:15:01.335 PLE Aggregate Log Change Notices: Not Supported 00:15:01.335 LBA Status Info Alert Notices: Not Supported 00:15:01.335 EGE Aggregate Log Change Notices: Not Supported 00:15:01.335 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.335 Zone Descriptor Change Notices: Not Supported 00:15:01.335 Discovery Log Change Notices: Not Supported 00:15:01.335 Controller Attributes 00:15:01.335 128-bit Host Identifier: Supported 00:15:01.336 Non-Operational Permissive Mode: Not Supported 00:15:01.336 NVM Sets: Not Supported 00:15:01.336 Read Recovery Levels: Not Supported 00:15:01.336 Endurance Groups: Not Supported 00:15:01.336 Predictable Latency Mode: Not Supported 00:15:01.336 Traffic Based Keep ALive: Not Supported 00:15:01.336 Namespace Granularity: Not Supported 00:15:01.336 SQ Associations: Not Supported 00:15:01.336 UUID List: Not Supported 00:15:01.336 Multi-Domain Subsystem: Not Supported 00:15:01.336 Fixed Capacity Management: Not Supported 00:15:01.336 Variable Capacity Management: Not Supported 00:15:01.336 Delete Endurance Group: Not Supported 00:15:01.336 Delete NVM Set: Not Supported 00:15:01.336 Extended LBA Formats Supported: Not Supported 00:15:01.336 Flexible Data Placement Supported: Not Supported 00:15:01.336 00:15:01.336 Controller Memory Buffer Support 00:15:01.336 ================================ 00:15:01.336 Supported: No 00:15:01.336 00:15:01.336 Persistent Memory Region Support 00:15:01.336 ================================ 00:15:01.336 Supported: No 00:15:01.336 00:15:01.336 Admin Command Set Attributes 00:15:01.336 ============================ 00:15:01.336 Security Send/Receive: Not Supported 00:15:01.336 Format NVM: Not Supported 00:15:01.336 Firmware Activate/Download: Not Supported 00:15:01.336 Namespace Management: Not Supported 00:15:01.336 Device Self-Test: Not Supported 00:15:01.336 Directives: Not Supported 00:15:01.336 NVMe-MI: Not Supported 00:15:01.336 Virtualization Management: Not Supported 00:15:01.336 Doorbell Buffer Config: Not Supported 00:15:01.336 Get LBA Status Capability: Not Supported 00:15:01.336 Command & Feature Lockdown Capability: Not Supported 00:15:01.336 Abort Command Limit: 4 00:15:01.336 Async Event Request Limit: 4 00:15:01.336 Number of Firmware Slots: N/A 00:15:01.336 Firmware Slot 1 Read-Only: N/A 00:15:01.336 Firmware Activation Without Reset: N/A 00:15:01.336 Multiple Update Detection Support: N/A 00:15:01.336 Firmware Update Granularity: No Information Provided 00:15:01.336 Per-Namespace SMART Log: No 00:15:01.336 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.336 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:01.336 Command Effects Log Page: Supported 00:15:01.336 Get Log Page Extended Data: Supported 00:15:01.336 Telemetry Log Pages: Not Supported 00:15:01.336 Persistent Event Log Pages: Not Supported 00:15:01.336 Supported Log Pages Log Page: May Support 00:15:01.336 Commands Supported & Effects Log Page: Not Supported 00:15:01.336 Feature Identifiers & Effects Log Page:May Support 00:15:01.336 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.336 Data Area 4 for Telemetry Log: Not Supported 00:15:01.336 Error Log Page Entries Supported: 128 00:15:01.336 Keep Alive: Supported 00:15:01.336 Keep Alive Granularity: 10000 ms 00:15:01.336 00:15:01.336 NVM Command Set Attributes 00:15:01.336 ========================== 
00:15:01.336 Submission Queue Entry Size 00:15:01.336 Max: 64 00:15:01.336 Min: 64 00:15:01.336 Completion Queue Entry Size 00:15:01.336 Max: 16 00:15:01.336 Min: 16 00:15:01.336 Number of Namespaces: 32 00:15:01.336 Compare Command: Supported 00:15:01.336 Write Uncorrectable Command: Not Supported 00:15:01.336 Dataset Management Command: Supported 00:15:01.336 Write Zeroes Command: Supported 00:15:01.336 Set Features Save Field: Not Supported 00:15:01.336 Reservations: Not Supported 00:15:01.336 Timestamp: Not Supported 00:15:01.336 Copy: Supported 00:15:01.336 Volatile Write Cache: Present 00:15:01.336 Atomic Write Unit (Normal): 1 00:15:01.336 Atomic Write Unit (PFail): 1 00:15:01.336 Atomic Compare & Write Unit: 1 00:15:01.336 Fused Compare & Write: Supported 00:15:01.336 Scatter-Gather List 00:15:01.336 SGL Command Set: Supported (Dword aligned) 00:15:01.336 SGL Keyed: Not Supported 00:15:01.336 SGL Bit Bucket Descriptor: Not Supported 00:15:01.336 SGL Metadata Pointer: Not Supported 00:15:01.336 Oversized SGL: Not Supported 00:15:01.336 SGL Metadata Address: Not Supported 00:15:01.336 SGL Offset: Not Supported 00:15:01.336 Transport SGL Data Block: Not Supported 00:15:01.336 Replay Protected Memory Block: Not Supported 00:15:01.336 00:15:01.336 Firmware Slot Information 00:15:01.336 ========================= 00:15:01.336 Active slot: 1 00:15:01.336 Slot 1 Firmware Revision: 24.05 00:15:01.336 00:15:01.336 00:15:01.336 Commands Supported and Effects 00:15:01.336 ============================== 00:15:01.336 Admin Commands 00:15:01.336 -------------- 00:15:01.336 Get Log Page (02h): Supported 00:15:01.336 Identify (06h): Supported 00:15:01.336 Abort (08h): Supported 00:15:01.336 Set Features (09h): Supported 00:15:01.336 Get Features (0Ah): Supported 00:15:01.336 Asynchronous Event Request (0Ch): Supported 00:15:01.336 Keep Alive (18h): Supported 00:15:01.336 I/O Commands 00:15:01.336 ------------ 00:15:01.336 Flush (00h): Supported LBA-Change 00:15:01.336 Write (01h): Supported LBA-Change 00:15:01.336 Read (02h): Supported 00:15:01.336 Compare (05h): Supported 00:15:01.336 Write Zeroes (08h): Supported LBA-Change 00:15:01.336 Dataset Management (09h): Supported LBA-Change 00:15:01.336 Copy (19h): Supported LBA-Change 00:15:01.336 Unknown (79h): Supported LBA-Change 00:15:01.336 Unknown (7Ah): Supported 00:15:01.336 00:15:01.336 Error Log 00:15:01.336 ========= 00:15:01.336 00:15:01.337 Arbitration 00:15:01.337 =========== 00:15:01.337 Arbitration Burst: 1 00:15:01.337 00:15:01.337 Power Management 00:15:01.337 ================ 00:15:01.337 Number of Power States: 1 00:15:01.337 Current Power State: Power State #0 00:15:01.337 Power State #0: 00:15:01.337 Max Power: 0.00 W 00:15:01.337 Non-Operational State: Operational 00:15:01.337 Entry Latency: Not Reported 00:15:01.337 Exit Latency: Not Reported 00:15:01.337 Relative Read Throughput: 0 00:15:01.337 Relative Read Latency: 0 00:15:01.337 Relative Write Throughput: 0 00:15:01.337 Relative Write Latency: 0 00:15:01.337 Idle Power: Not Reported 00:15:01.337 Active Power: Not Reported 00:15:01.337 Non-Operational Permissive Mode: Not Supported 00:15:01.337 00:15:01.337 Health Information 00:15:01.337 ================== 00:15:01.337 Critical Warnings: 00:15:01.337 Available Spare Space: OK 00:15:01.337 Temperature: OK 00:15:01.337 Device Reliability: OK 00:15:01.337 Read Only: No 00:15:01.337 Volatile Memory Backup: OK 00:15:01.337 [2024-05-15 16:36:08.414337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:01.337 [2024-05-15 16:36:08.414355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:01.337 [2024-05-15 16:36:08.414393] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:01.337 [2024-05-15 16:36:08.414410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.337 [2024-05-15 16:36:08.414421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.337 [2024-05-15 16:36:08.414432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.337 [2024-05-15 16:36:08.414442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.337 [2024-05-15 16:36:08.417227] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:01.337 [2024-05-15 16:36:08.417249] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:01.337 [2024-05-15 16:36:08.417951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.337 [2024-05-15 16:36:08.418037] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:01.337 [2024-05-15 16:36:08.418051] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:01.337 [2024-05-15 16:36:08.418961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:01.337 [2024-05-15 16:36:08.418984] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:01.337 [2024-05-15 16:36:08.419036] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:01.337 [2024-05-15 16:36:08.420998] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.337
Current Temperature: 0 Kelvin (-273 Celsius) 00:15:01.337 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:01.337 Available Spare: 0% 00:15:01.337 Available Spare Threshold: 0% 00:15:01.337 Life Percentage Used: 0% 00:15:01.337 Data Units Read: 0 00:15:01.337 Data Units Written: 0 00:15:01.337 Host Read Commands: 0 00:15:01.337 Host Write Commands: 0 00:15:01.337 Controller Busy Time: 0 minutes 00:15:01.337 Power Cycles: 0 00:15:01.337 Power On Hours: 0 hours 00:15:01.337 Unsafe Shutdowns: 0 00:15:01.337 Unrecoverable Media Errors: 0 00:15:01.337 Lifetime Error Log Entries: 0 00:15:01.337 Warning Temperature Time: 0 minutes 00:15:01.337 Critical Temperature Time: 0 minutes 00:15:01.337 00:15:01.337 Number of Queues 00:15:01.337 ================ 00:15:01.337 Number of I/O Submission Queues: 127 00:15:01.337 Number of I/O Completion Queues: 127 00:15:01.337 00:15:01.337 Active Namespaces 00:15:01.337 ================= 00:15:01.337 Namespace
ID:1 00:15:01.337 Error Recovery Timeout: Unlimited 00:15:01.337 Command Set Identifier: NVM (00h) 00:15:01.337 Deallocate: Supported 00:15:01.337 Deallocated/Unwritten Error: Not Supported 00:15:01.337 Deallocated Read Value: Unknown 00:15:01.337 Deallocate in Write Zeroes: Not Supported 00:15:01.337 Deallocated Guard Field: 0xFFFF 00:15:01.337 Flush: Supported 00:15:01.337 Reservation: Supported 00:15:01.337 Namespace Sharing Capabilities: Multiple Controllers 00:15:01.337 Size (in LBAs): 131072 (0GiB) 00:15:01.337 Capacity (in LBAs): 131072 (0GiB) 00:15:01.337 Utilization (in LBAs): 131072 (0GiB) 00:15:01.337 NGUID: 3523E715EBA5459D9C872FF958EAC551 00:15:01.337 UUID: 3523e715-eba5-459d-9c87-2ff958eac551 00:15:01.337 Thin Provisioning: Not Supported 00:15:01.337 Per-NS Atomic Units: Yes 00:15:01.337 Atomic Boundary Size (Normal): 0 00:15:01.337 Atomic Boundary Size (PFail): 0 00:15:01.337 Atomic Boundary Offset: 0 00:15:01.337 Maximum Single Source Range Length: 65535 00:15:01.337 Maximum Copy Length: 65535 00:15:01.337 Maximum Source Range Count: 1 00:15:01.337 NGUID/EUI64 Never Reused: No 00:15:01.337 Namespace Write Protected: No 00:15:01.337 Number of LBA Formats: 1 00:15:01.337 Current LBA Format: LBA Format #00 00:15:01.337 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:01.337 00:15:01.337 16:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:01.337 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.594 [2024-05-15 16:36:08.651070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.891 Initializing NVMe Controllers 00:15:06.891 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:06.891 Initialization complete. Launching workers. 00:15:06.891 ======================================================== 00:15:06.891 Latency(us) 00:15:06.891 Device Information : IOPS MiB/s Average min max 00:15:06.891 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34527.97 134.87 3708.26 1164.38 9607.87 00:15:06.891 ======================================================== 00:15:06.891 Total : 34527.97 134.87 3708.26 1164.38 9607.87 00:15:06.891 00:15:06.891 [2024-05-15 16:36:13.672731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.891 16:36:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:06.891 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.891 [2024-05-15 16:36:13.908916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.156 Initializing NVMe Controllers 00:15:12.156 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.157 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:12.157 Initialization complete. Launching workers. 
00:15:12.157 ======================================================== 00:15:12.157 Latency(us) 00:15:12.157 Device Information : IOPS MiB/s Average min max 00:15:12.157 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16009.29 62.54 8000.62 4973.54 15947.31 00:15:12.157 ======================================================== 00:15:12.157 Total : 16009.29 62.54 8000.62 4973.54 15947.31 00:15:12.157 00:15:12.157 [2024-05-15 16:36:18.948125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.157 16:36:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:12.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.157 [2024-05-15 16:36:19.180297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.420 [2024-05-15 16:36:24.264629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.420 Initializing NVMe Controllers 00:15:17.420 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.420 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.420 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:17.420 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:17.420 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:17.420 Initialization complete. Launching workers. 00:15:17.420 Starting thread on core 2 00:15:17.420 Starting thread on core 3 00:15:17.420 Starting thread on core 1 00:15:17.420 16:36:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:17.420 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.420 [2024-05-15 16:36:24.574670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.706 [2024-05-15 16:36:27.638067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.706 Initializing NVMe Controllers 00:15:20.706 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.706 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.706 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:20.706 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:20.706 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:20.706 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:20.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:20.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:20.706 Initialization complete. Launching workers. 
00:15:20.706 Starting thread on core 1 with urgent priority queue 00:15:20.706 Starting thread on core 2 with urgent priority queue 00:15:20.706 Starting thread on core 3 with urgent priority queue 00:15:20.706 Starting thread on core 0 with urgent priority queue 00:15:20.706 SPDK bdev Controller (SPDK1 ) core 0: 5501.00 IO/s 18.18 secs/100000 ios 00:15:20.706 SPDK bdev Controller (SPDK1 ) core 1: 5491.00 IO/s 18.21 secs/100000 ios 00:15:20.706 SPDK bdev Controller (SPDK1 ) core 2: 5652.33 IO/s 17.69 secs/100000 ios 00:15:20.706 SPDK bdev Controller (SPDK1 ) core 3: 5929.00 IO/s 16.87 secs/100000 ios 00:15:20.706 ======================================================== 00:15:20.706 00:15:20.706 16:36:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.706 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.964 [2024-05-15 16:36:27.948735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.964 Initializing NVMe Controllers 00:15:20.964 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.964 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.964 Namespace ID: 1 size: 0GB 00:15:20.964 Initialization complete. 00:15:20.964 INFO: using host memory buffer for IO 00:15:20.964 Hello world! 00:15:20.964 [2024-05-15 16:36:27.983287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.964 16:36:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.964 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.223 [2024-05-15 16:36:28.285949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.158 Initializing NVMe Controllers 00:15:22.158 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.158 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.158 Initialization complete. Launching workers. 
00:15:22.158 submit (in ns) avg, min, max = 8805.3, 3553.3, 4015334.4 00:15:22.158 complete (in ns) avg, min, max = 26910.6, 2066.7, 4015881.1 00:15:22.158 00:15:22.158 Submit histogram 00:15:22.158 ================ 00:15:22.158 Range in us Cumulative Count 00:15:22.158 3.532 - 3.556: 0.0076% ( 1) 00:15:22.158 3.556 - 3.579: 0.2043% ( 26) 00:15:22.158 3.579 - 3.603: 4.6304% ( 585) 00:15:22.158 3.603 - 3.627: 12.9833% ( 1104) 00:15:22.158 3.627 - 3.650: 24.5063% ( 1523) 00:15:22.158 3.650 - 3.674: 32.9424% ( 1115) 00:15:22.158 3.674 - 3.698: 39.4416% ( 859) 00:15:22.158 3.698 - 3.721: 45.2902% ( 773) 00:15:22.158 3.721 - 3.745: 50.2610% ( 657) 00:15:22.158 3.745 - 3.769: 54.9444% ( 619) 00:15:22.158 3.769 - 3.793: 58.6139% ( 485) 00:15:22.158 3.793 - 3.816: 61.8143% ( 423) 00:15:22.158 3.816 - 3.840: 65.0072% ( 422) 00:15:22.158 3.840 - 3.864: 69.7511% ( 627) 00:15:22.158 3.864 - 3.887: 74.8884% ( 679) 00:15:22.158 3.887 - 3.911: 79.3448% ( 589) 00:15:22.158 3.911 - 3.935: 82.7873% ( 455) 00:15:22.158 3.935 - 3.959: 84.7242% ( 256) 00:15:22.158 3.959 - 3.982: 86.4795% ( 232) 00:15:22.158 3.982 - 4.006: 88.5224% ( 270) 00:15:22.158 4.006 - 4.030: 90.1339% ( 213) 00:15:22.158 4.030 - 4.053: 91.0872% ( 126) 00:15:22.158 4.053 - 4.077: 92.0633% ( 129) 00:15:22.158 4.077 - 4.101: 93.1982% ( 150) 00:15:22.158 4.101 - 4.124: 94.0834% ( 117) 00:15:22.158 4.124 - 4.148: 94.8475% ( 101) 00:15:22.158 4.148 - 4.172: 95.3620% ( 68) 00:15:22.158 4.172 - 4.196: 95.6949% ( 44) 00:15:22.158 4.196 - 4.219: 95.9976% ( 40) 00:15:22.158 4.219 - 4.243: 96.2775% ( 37) 00:15:22.158 4.243 - 4.267: 96.5348% ( 34) 00:15:22.158 4.267 - 4.290: 96.6785% ( 19) 00:15:22.158 4.290 - 4.314: 96.8147% ( 18) 00:15:22.158 4.314 - 4.338: 96.9282% ( 15) 00:15:22.158 4.338 - 4.361: 97.0341% ( 14) 00:15:22.158 4.361 - 4.385: 97.1098% ( 10) 00:15:22.158 4.385 - 4.409: 97.2233% ( 15) 00:15:22.158 4.409 - 4.433: 97.2611% ( 5) 00:15:22.159 4.433 - 4.456: 97.2914% ( 4) 00:15:22.159 4.456 - 4.480: 97.3292% ( 5) 00:15:22.159 4.480 - 4.504: 97.3443% ( 2) 00:15:22.159 4.527 - 4.551: 97.3746% ( 4) 00:15:22.159 4.551 - 4.575: 97.4049% ( 4) 00:15:22.159 4.575 - 4.599: 97.4124% ( 1) 00:15:22.159 4.599 - 4.622: 97.4427% ( 4) 00:15:22.159 4.622 - 4.646: 97.4578% ( 2) 00:15:22.159 4.670 - 4.693: 97.4805% ( 3) 00:15:22.159 4.693 - 4.717: 97.4881% ( 1) 00:15:22.159 4.717 - 4.741: 97.5032% ( 2) 00:15:22.159 4.741 - 4.764: 97.5108% ( 1) 00:15:22.159 4.764 - 4.788: 97.5183% ( 1) 00:15:22.159 4.788 - 4.812: 97.5562% ( 5) 00:15:22.159 4.812 - 4.836: 97.5864% ( 4) 00:15:22.159 4.836 - 4.859: 97.6167% ( 4) 00:15:22.159 4.859 - 4.883: 97.6318% ( 2) 00:15:22.159 4.883 - 4.907: 97.6470% ( 2) 00:15:22.159 4.907 - 4.930: 97.6848% ( 5) 00:15:22.159 4.930 - 4.954: 97.7075% ( 3) 00:15:22.159 4.954 - 4.978: 97.7529% ( 6) 00:15:22.159 4.978 - 5.001: 97.8437% ( 12) 00:15:22.159 5.001 - 5.025: 97.8815% ( 5) 00:15:22.159 5.025 - 5.049: 97.8891% ( 1) 00:15:22.159 5.049 - 5.073: 97.9269% ( 5) 00:15:22.159 5.073 - 5.096: 97.9420% ( 2) 00:15:22.159 5.096 - 5.120: 97.9723% ( 4) 00:15:22.159 5.120 - 5.144: 97.9874% ( 2) 00:15:22.159 5.144 - 5.167: 98.0253% ( 5) 00:15:22.159 5.167 - 5.191: 98.0328% ( 1) 00:15:22.159 5.191 - 5.215: 98.0555% ( 3) 00:15:22.159 5.215 - 5.239: 98.0782% ( 3) 00:15:22.159 5.239 - 5.262: 98.1161% ( 5) 00:15:22.159 5.262 - 5.286: 98.1388% ( 3) 00:15:22.159 5.286 - 5.310: 98.1463% ( 1) 00:15:22.159 5.310 - 5.333: 98.1766% ( 4) 00:15:22.159 5.333 - 5.357: 98.1842% ( 1) 00:15:22.159 5.381 - 5.404: 98.2069% ( 3) 00:15:22.159 5.428 - 5.452: 98.2144% ( 1) 
00:15:22.159 5.452 - 5.476: 98.2220% ( 1) 00:15:22.159 5.476 - 5.499: 98.2296% ( 1) 00:15:22.159 5.523 - 5.547: 98.2371% ( 1) 00:15:22.159 5.547 - 5.570: 98.2523% ( 2) 00:15:22.159 5.570 - 5.594: 98.2598% ( 1) 00:15:22.159 5.689 - 5.713: 98.2674% ( 1) 00:15:22.159 5.950 - 5.973: 98.2749% ( 1) 00:15:22.159 6.021 - 6.044: 98.2825% ( 1) 00:15:22.159 6.044 - 6.068: 98.2901% ( 1) 00:15:22.159 6.068 - 6.116: 98.2976% ( 1) 00:15:22.159 6.163 - 6.210: 98.3128% ( 2) 00:15:22.159 6.210 - 6.258: 98.3279% ( 2) 00:15:22.159 6.447 - 6.495: 98.3355% ( 1) 00:15:22.159 6.542 - 6.590: 98.3506% ( 2) 00:15:22.159 6.590 - 6.637: 98.3582% ( 1) 00:15:22.159 6.637 - 6.684: 98.3657% ( 1) 00:15:22.159 6.779 - 6.827: 98.3733% ( 1) 00:15:22.159 6.827 - 6.874: 98.3809% ( 1) 00:15:22.159 6.874 - 6.921: 98.3960% ( 2) 00:15:22.159 6.921 - 6.969: 98.4036% ( 1) 00:15:22.159 7.016 - 7.064: 98.4111% ( 1) 00:15:22.159 7.111 - 7.159: 98.4187% ( 1) 00:15:22.159 7.159 - 7.206: 98.4414% ( 3) 00:15:22.159 7.206 - 7.253: 98.4490% ( 1) 00:15:22.159 7.253 - 7.301: 98.4565% ( 1) 00:15:22.159 7.348 - 7.396: 98.4641% ( 1) 00:15:22.159 7.396 - 7.443: 98.4717% ( 1) 00:15:22.159 7.443 - 7.490: 98.4792% ( 1) 00:15:22.159 7.490 - 7.538: 98.4944% ( 2) 00:15:22.159 7.538 - 7.585: 98.5171% ( 3) 00:15:22.159 7.585 - 7.633: 98.5398% ( 3) 00:15:22.159 7.680 - 7.727: 98.5549% ( 2) 00:15:22.159 7.727 - 7.775: 98.5625% ( 1) 00:15:22.159 7.775 - 7.822: 98.5852% ( 3) 00:15:22.159 7.917 - 7.964: 98.5927% ( 1) 00:15:22.159 7.964 - 8.012: 98.6003% ( 1) 00:15:22.159 8.012 - 8.059: 98.6079% ( 1) 00:15:22.159 8.059 - 8.107: 98.6154% ( 1) 00:15:22.159 8.107 - 8.154: 98.6230% ( 1) 00:15:22.159 8.154 - 8.201: 98.6381% ( 2) 00:15:22.159 8.201 - 8.249: 98.6532% ( 2) 00:15:22.159 8.249 - 8.296: 98.6608% ( 1) 00:15:22.159 8.391 - 8.439: 98.6835% ( 3) 00:15:22.159 8.486 - 8.533: 98.6911% ( 1) 00:15:22.159 8.581 - 8.628: 98.6986% ( 1) 00:15:22.159 9.007 - 9.055: 98.7062% ( 1) 00:15:22.159 9.055 - 9.102: 98.7138% ( 1) 00:15:22.159 9.150 - 9.197: 98.7213% ( 1) 00:15:22.159 9.529 - 9.576: 98.7289% ( 1) 00:15:22.159 9.624 - 9.671: 98.7365% ( 1) 00:15:22.159 9.671 - 9.719: 98.7440% ( 1) 00:15:22.159 10.003 - 10.050: 98.7516% ( 1) 00:15:22.159 10.098 - 10.145: 98.7592% ( 1) 00:15:22.159 10.240 - 10.287: 98.7667% ( 1) 00:15:22.159 10.335 - 10.382: 98.7743% ( 1) 00:15:22.159 10.572 - 10.619: 98.7819% ( 1) 00:15:22.159 10.667 - 10.714: 98.7894% ( 1) 00:15:22.159 11.520 - 11.567: 98.7970% ( 1) 00:15:22.159 11.804 - 11.852: 98.8046% ( 1) 00:15:22.159 11.947 - 11.994: 98.8121% ( 1) 00:15:22.159 12.326 - 12.421: 98.8197% ( 1) 00:15:22.159 12.610 - 12.705: 98.8273% ( 1) 00:15:22.159 12.895 - 12.990: 98.8348% ( 1) 00:15:22.159 13.179 - 13.274: 98.8424% ( 1) 00:15:22.159 13.274 - 13.369: 98.8575% ( 2) 00:15:22.159 14.033 - 14.127: 98.8651% ( 1) 00:15:22.159 14.127 - 14.222: 98.8727% ( 1) 00:15:22.159 14.222 - 14.317: 98.8802% ( 1) 00:15:22.159 14.507 - 14.601: 98.8878% ( 1) 00:15:22.159 16.972 - 17.067: 98.8954% ( 1) 00:15:22.159 17.161 - 17.256: 98.9181% ( 3) 00:15:22.159 17.256 - 17.351: 98.9256% ( 1) 00:15:22.159 17.351 - 17.446: 98.9332% ( 1) 00:15:22.159 17.446 - 17.541: 98.9710% ( 5) 00:15:22.159 17.541 - 17.636: 99.0013% ( 4) 00:15:22.159 17.636 - 17.730: 99.0316% ( 4) 00:15:22.159 17.730 - 17.825: 99.0618% ( 4) 00:15:22.159 17.825 - 17.920: 99.1148% ( 7) 00:15:22.159 17.920 - 18.015: 99.1677% ( 7) 00:15:22.159 18.015 - 18.110: 99.2283% ( 8) 00:15:22.159 18.110 - 18.204: 99.2510% ( 3) 00:15:22.159 18.204 - 18.299: 99.3342% ( 11) 00:15:22.159 18.299 - 18.394: 99.3645% ( 4) 
00:15:22.159 18.394 - 18.489: 99.4401% ( 10) 00:15:22.159 18.489 - 18.584: 99.5536% ( 15) 00:15:22.159 18.584 - 18.679: 99.6066% ( 7) 00:15:22.159 18.679 - 18.773: 99.6520% ( 6) 00:15:22.159 18.773 - 18.868: 99.6898% ( 5) 00:15:22.159 18.868 - 18.963: 99.7125% ( 3) 00:15:22.159 18.963 - 19.058: 99.7503% ( 5) 00:15:22.159 19.058 - 19.153: 99.7806% ( 4) 00:15:22.159 19.153 - 19.247: 99.8033% ( 3) 00:15:22.159 19.247 - 19.342: 99.8335% ( 4) 00:15:22.159 19.437 - 19.532: 99.8411% ( 1) 00:15:22.159 19.532 - 19.627: 99.8487% ( 1) 00:15:22.159 20.290 - 20.385: 99.8562% ( 1) 00:15:22.159 20.764 - 20.859: 99.8638% ( 1) 00:15:22.160 21.713 - 21.807: 99.8714% ( 1) 00:15:22.160 24.652 - 24.841: 99.8789% ( 1) 00:15:22.160 3980.705 - 4004.978: 99.9924% ( 15) 00:15:22.160 4004.978 - 4029.250: 100.0000% ( 1) 00:15:22.160 00:15:22.160 Complete histogram 00:15:22.160 ================== 00:15:22.160 Range in us Cumulative Count 00:15:22.160 2.062 - 2.074: 1.0214% ( 135) 00:15:22.160 2.074 - 2.086: 19.7397% ( 2474) 00:15:22.160 2.086 - 2.098: 25.6564% ( 782) 00:15:22.160 2.098 - 2.110: 32.3750% ( 888) 00:15:22.160 2.110 - 2.121: 52.1071% ( 2608) 00:15:22.160 2.121 - 2.133: 55.1335% ( 400) 00:15:22.160 2.133 - 2.145: 59.0754% ( 521) 00:15:22.160 2.145 - 2.157: 66.3918% ( 967) 00:15:22.160 2.157 - 2.169: 67.6553% ( 167) 00:15:22.160 2.169 - 2.181: 72.1192% ( 590) 00:15:22.160 2.181 - 2.193: 78.2628% ( 812) 00:15:22.160 2.193 - 2.204: 79.2843% ( 135) 00:15:22.160 2.204 - 2.216: 80.9336% ( 218) 00:15:22.160 2.216 - 2.228: 84.9361% ( 529) 00:15:22.160 2.228 - 2.240: 85.8591% ( 122) 00:15:22.160 2.240 - 2.252: 88.0911% ( 295) 00:15:22.160 2.252 - 2.264: 92.1616% ( 538) 00:15:22.160 2.264 - 2.276: 92.8350% ( 89) 00:15:22.160 2.276 - 2.287: 93.4251% ( 78) 00:15:22.160 2.287 - 2.299: 94.1590% ( 97) 00:15:22.160 2.299 - 2.311: 94.4238% ( 35) 00:15:22.160 2.311 - 2.323: 94.8778% ( 60) 00:15:22.160 2.323 - 2.335: 95.3772% ( 66) 00:15:22.160 2.335 - 2.347: 95.5588% ( 24) 00:15:22.160 2.347 - 2.359: 95.6420% ( 11) 00:15:22.160 2.359 - 2.370: 95.8917% ( 33) 00:15:22.160 2.370 - 2.382: 96.1035% ( 28) 00:15:22.160 2.382 - 2.394: 96.3607% ( 34) 00:15:22.160 2.394 - 2.406: 96.7996% ( 58) 00:15:22.160 2.406 - 2.418: 97.0417% ( 32) 00:15:22.160 2.418 - 2.430: 97.2157% ( 23) 00:15:22.160 2.430 - 2.441: 97.4276% ( 28) 00:15:22.160 2.441 - 2.453: 97.5864% ( 21) 00:15:22.160 2.453 - 2.465: 97.7605% ( 23) 00:15:22.160 2.465 - 2.477: 97.8513% ( 12) 00:15:22.160 2.477 - 2.489: 97.9799% ( 17) 00:15:22.160 2.489 - 2.501: 98.1085% ( 17) 00:15:22.160 2.501 - 2.513: 98.2144% ( 14) 00:15:22.160 2.513 - 2.524: 98.2825% ( 9) 00:15:22.160 2.524 - 2.536: 98.3128% ( 4) 00:15:22.160 2.536 - 2.548: 98.3430% ( 4) 00:15:22.160 2.548 - 2.560: 98.3506% ( 1) 00:15:22.160 2.560 - 2.572: 98.3809% ( 4) 00:15:22.160 2.572 - 2.584: 98.3884% ( 1) 00:15:22.160 2.584 - 2.596: 98.4111% ( 3) 00:15:22.160 2.596 - 2.607: 98.4263% ( 2) 00:15:22.160 2.631 - 2.643: 98.4414% ( 2) 00:15:22.160 2.643 - 2.655: 98.4565% ( 2) 00:15:22.160 2.667 - 2.679: 98.4717% ( 2) 00:15:22.160 2.702 - 2.714: 98.4792% ( 1) 00:15:22.160 2.773 - 2.785: 98.4868% ( 1) 00:15:22.160 2.785 - 2.797: 98.4944% ( 1) 00:15:22.160 2.809 - 2.821: 98.5019% ( 1) 00:15:22.160 2.951 - 2.963: 98.5095% ( 1) 00:15:22.160 3.010 - 3.022: 98.5246% ( 2) 00:15:22.160 3.200 - 3.224: 98.5398% ( 2) 00:15:22.160 3.247 - 3.271: 98.5549% ( 2) 00:15:22.160 3.295 - 3.319: 98.5625% ( 1) 00:15:22.160 3.319 - 3.342: 98.5852% ( 3) 00:15:22.160 3.342 - 3.366: 98.6003% ( 2) 00:15:22.160 3.366 - 3.390: 98.6079% ( 1) 00:15:22.160 
3.390 - 3.413: 98.6230% ( 2) 00:15:22.160 3.413 - 3.437: 98.6381% ( 2) 00:15:22.160 3.461 - 3.484: 98.6532% ( 2) 00:15:22.160 3.484 - 3.508: 98.6608% ( 1) 00:15:22.160 3.532 - 3.556: 98.6684% ( 1) 00:15:22.160 3.556 - 3.579: 98.6759% ( 1) 00:15:22.160 3.627 - 3.650: 98.6835% ( 1) 00:15:22.160 3.698 - 3.721: 98.6911% ( 1) 00:15:22.160 3.721 - 3.745: 98.6986% ( 1) 00:15:22.160 3.745 - 3.769: 98.7062% ( 1) 00:15:22.160 3.840 - 3.864: 98.7138% ( 1) 00:15:22.160 3.935 - 3.959: 98.7213% ( 1) 00:15:22.160 [2024-05-15 16:36:29.310388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.160 4.077 - 4.101: 98.7289% ( 1) 00:15:22.160 5.001 - 5.025: 98.7365% ( 1) 00:15:22.160 5.120 - 5.144: 98.7440% ( 1) 00:15:22.160 5.144 - 5.167: 98.7516% ( 1) 00:15:22.160 5.239 - 5.262: 98.7592% ( 1) 00:15:22.160 5.310 - 5.333: 98.7667% ( 1) 00:15:22.160 5.428 - 5.452: 98.7743% ( 1) 00:15:22.160 5.547 - 5.570: 98.7970% ( 3) 00:15:22.160 5.641 - 5.665: 98.8046% ( 1) 00:15:22.160 5.689 - 5.713: 98.8121% ( 1) 00:15:22.160 5.760 - 5.784: 98.8197% ( 1) 00:15:22.160 5.807 - 5.831: 98.8273% ( 1) 00:15:22.160 5.831 - 5.855: 98.8424% ( 2) 00:15:22.160 5.902 - 5.926: 98.8575% ( 2) 00:15:22.160 6.044 - 6.068: 98.8651% ( 1) 00:15:22.160 6.210 - 6.258: 98.8802% ( 2) 00:15:22.160 6.258 - 6.305: 98.8954% ( 2) 00:15:22.160 6.637 - 6.684: 98.9029% ( 1) 00:15:22.160 15.360 - 15.455: 98.9105% ( 1) 00:15:22.160 15.550 - 15.644: 98.9181% ( 1) 00:15:22.160 15.644 - 15.739: 98.9408% ( 3) 00:15:22.160 15.739 - 15.834: 98.9635% ( 3) 00:15:22.160 15.834 - 15.929: 98.9710% ( 1) 00:15:22.160 15.929 - 16.024: 99.0013% ( 4) 00:15:22.160 16.024 - 16.119: 99.0316% ( 4) 00:15:22.160 16.119 - 16.213: 99.0467% ( 2) 00:15:22.160 16.213 - 16.308: 99.0694% ( 3) 00:15:22.160 16.308 - 16.403: 99.1072% ( 5) 00:15:22.160 16.403 - 16.498: 99.1375% ( 4) 00:15:22.160 16.498 - 16.593: 99.2056% ( 9) 00:15:22.160 16.593 - 16.687: 99.2358% ( 4) 00:15:22.160 16.687 - 16.782: 99.2510% ( 2) 00:15:22.160 16.782 - 16.877: 99.2888% ( 5) 00:15:22.160 16.877 - 16.972: 99.2964% ( 1) 00:15:22.160 17.067 - 17.161: 99.3115% ( 2) 00:15:22.160 17.256 - 17.351: 99.3191% ( 1) 00:15:22.160 17.351 - 17.446: 99.3418% ( 3) 00:15:22.160 17.446 - 17.541: 99.3569% ( 2) 00:15:22.160 18.110 - 18.204: 99.3720% ( 2) 00:15:22.160 18.868 - 18.963: 99.3796% ( 1) 00:15:22.160 3009.801 - 3021.938: 99.3872% ( 1) 00:15:22.161 3203.982 - 3228.255: 99.3947% ( 1) 00:15:22.161 3980.705 - 4004.978: 99.8714% ( 63) 00:15:22.161 4004.978 - 4029.250: 100.0000% ( 17) 00:15:22.161 00:15:22.161 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:22.161 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:22.161 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:22.161 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:22.161 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.419 [ 00:15:22.419 { 00:15:22.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.419 "subtype": "Discovery", 00:15:22.419 "listen_addresses": [], 00:15:22.419 "allow_any_host": true, 00:15:22.419 "hosts": [] 00:15:22.419 }, 00:15:22.419 { 00:15:22.419 "nqn":
"nqn.2019-07.io.spdk:cnode1", 00:15:22.419 "subtype": "NVMe", 00:15:22.419 "listen_addresses": [ 00:15:22.419 { 00:15:22.419 "trtype": "VFIOUSER", 00:15:22.419 "adrfam": "IPv4", 00:15:22.419 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.419 "trsvcid": "0" 00:15:22.419 } 00:15:22.419 ], 00:15:22.419 "allow_any_host": true, 00:15:22.419 "hosts": [], 00:15:22.419 "serial_number": "SPDK1", 00:15:22.419 "model_number": "SPDK bdev Controller", 00:15:22.419 "max_namespaces": 32, 00:15:22.419 "min_cntlid": 1, 00:15:22.419 "max_cntlid": 65519, 00:15:22.419 "namespaces": [ 00:15:22.419 { 00:15:22.419 "nsid": 1, 00:15:22.419 "bdev_name": "Malloc1", 00:15:22.419 "name": "Malloc1", 00:15:22.419 "nguid": "3523E715EBA5459D9C872FF958EAC551", 00:15:22.419 "uuid": "3523e715-eba5-459d-9c87-2ff958eac551" 00:15:22.419 } 00:15:22.419 ] 00:15:22.419 }, 00:15:22.419 { 00:15:22.419 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.419 "subtype": "NVMe", 00:15:22.419 "listen_addresses": [ 00:15:22.419 { 00:15:22.419 "trtype": "VFIOUSER", 00:15:22.419 "adrfam": "IPv4", 00:15:22.419 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.419 "trsvcid": "0" 00:15:22.419 } 00:15:22.419 ], 00:15:22.419 "allow_any_host": true, 00:15:22.419 "hosts": [], 00:15:22.419 "serial_number": "SPDK2", 00:15:22.419 "model_number": "SPDK bdev Controller", 00:15:22.419 "max_namespaces": 32, 00:15:22.419 "min_cntlid": 1, 00:15:22.419 "max_cntlid": 65519, 00:15:22.419 "namespaces": [ 00:15:22.419 { 00:15:22.419 "nsid": 1, 00:15:22.419 "bdev_name": "Malloc2", 00:15:22.419 "name": "Malloc2", 00:15:22.419 "nguid": "3DED014893694A80A18DA8B1E19E252C", 00:15:22.419 "uuid": "3ded0148-9369-4a80-a18d-a8b1e19e252c" 00:15:22.419 } 00:15:22.419 ] 00:15:22.419 } 00:15:22.419 ] 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1737490 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:22.679 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:22.679 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.679 [2024-05-15 16:36:29.817767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.937 Malloc3 00:15:22.937 16:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:22.937 [2024-05-15 16:36:30.162239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.195 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.195 Asynchronous Event Request test 00:15:23.195 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.195 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.195 Registering asynchronous event callbacks... 00:15:23.195 Starting namespace attribute notice tests for all controllers... 00:15:23.195 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.195 aer_cb - Changed Namespace 00:15:23.195 Cleaning up... 00:15:23.195 [ 00:15:23.195 { 00:15:23.195 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.195 "subtype": "Discovery", 00:15:23.195 "listen_addresses": [], 00:15:23.195 "allow_any_host": true, 00:15:23.195 "hosts": [] 00:15:23.195 }, 00:15:23.195 { 00:15:23.195 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.195 "subtype": "NVMe", 00:15:23.195 "listen_addresses": [ 00:15:23.195 { 00:15:23.195 "trtype": "VFIOUSER", 00:15:23.195 "adrfam": "IPv4", 00:15:23.195 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.195 "trsvcid": "0" 00:15:23.195 } 00:15:23.195 ], 00:15:23.195 "allow_any_host": true, 00:15:23.195 "hosts": [], 00:15:23.195 "serial_number": "SPDK1", 00:15:23.195 "model_number": "SPDK bdev Controller", 00:15:23.195 "max_namespaces": 32, 00:15:23.195 "min_cntlid": 1, 00:15:23.195 "max_cntlid": 65519, 00:15:23.195 "namespaces": [ 00:15:23.195 { 00:15:23.195 "nsid": 1, 00:15:23.195 "bdev_name": "Malloc1", 00:15:23.195 "name": "Malloc1", 00:15:23.195 "nguid": "3523E715EBA5459D9C872FF958EAC551", 00:15:23.195 "uuid": "3523e715-eba5-459d-9c87-2ff958eac551" 00:15:23.195 }, 00:15:23.195 { 00:15:23.195 "nsid": 2, 00:15:23.195 "bdev_name": "Malloc3", 00:15:23.195 "name": "Malloc3", 00:15:23.195 "nguid": "B06DD6B39A7A4BF9A93109BAB4A7B42B", 00:15:23.195 "uuid": "b06dd6b3-9a7a-4bf9-a931-09bab4a7b42b" 00:15:23.195 } 00:15:23.195 ] 00:15:23.195 }, 00:15:23.195 { 00:15:23.195 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.195 "subtype": "NVMe", 00:15:23.195 "listen_addresses": [ 00:15:23.196 { 00:15:23.196 "trtype": "VFIOUSER", 00:15:23.196 "adrfam": "IPv4", 00:15:23.196 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.196 "trsvcid": "0" 00:15:23.196 } 00:15:23.196 ], 00:15:23.196 "allow_any_host": true, 00:15:23.196 "hosts": [], 00:15:23.196 "serial_number": "SPDK2", 00:15:23.196 "model_number": "SPDK bdev Controller", 00:15:23.196 
"max_namespaces": 32, 00:15:23.196 "min_cntlid": 1, 00:15:23.196 "max_cntlid": 65519, 00:15:23.196 "namespaces": [ 00:15:23.196 { 00:15:23.196 "nsid": 1, 00:15:23.196 "bdev_name": "Malloc2", 00:15:23.196 "name": "Malloc2", 00:15:23.196 "nguid": "3DED014893694A80A18DA8B1E19E252C", 00:15:23.196 "uuid": "3ded0148-9369-4a80-a18d-a8b1e19e252c" 00:15:23.196 } 00:15:23.196 ] 00:15:23.196 } 00:15:23.196 ] 00:15:23.456 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1737490 00:15:23.456 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.456 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:23.456 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:23.456 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:23.456 [2024-05-15 16:36:30.444873] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:15:23.456 [2024-05-15 16:36:30.444919] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737623 ] 00:15:23.456 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.456 [2024-05-15 16:36:30.479464] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:23.456 [2024-05-15 16:36:30.487551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:23.456 [2024-05-15 16:36:30.487596] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7443ede000 00:15:23.456 [2024-05-15 16:36:30.488550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.456 [2024-05-15 16:36:30.489549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.456 [2024-05-15 16:36:30.490561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.456 [2024-05-15 16:36:30.491569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.456 [2024-05-15 16:36:30.492572] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.456 [2024-05-15 16:36:30.493592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.456 [2024-05-15 16:36:30.494582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.457 [2024-05-15 16:36:30.495589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.457 [2024-05-15 16:36:30.496594] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:23.457 [2024-05-15 16:36:30.496615] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7442c94000 00:15:23.457 [2024-05-15 16:36:30.497730] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:23.457 [2024-05-15 16:36:30.513929] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:23.457 [2024-05-15 16:36:30.513961] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:23.457 [2024-05-15 16:36:30.519073] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:23.457 [2024-05-15 16:36:30.519127] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:23.457 [2024-05-15 16:36:30.519210] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:23.457 [2024-05-15 16:36:30.519259] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:23.457 [2024-05-15 16:36:30.519271] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:23.457 [2024-05-15 16:36:30.520078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:23.457 [2024-05-15 16:36:30.520098] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:23.457 [2024-05-15 16:36:30.520109] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:23.457 [2024-05-15 16:36:30.521084] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:23.457 [2024-05-15 16:36:30.521103] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:23.457 [2024-05-15 16:36:30.521116] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:23.457 [2024-05-15 16:36:30.522086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:23.457 [2024-05-15 16:36:30.522106] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:23.457 [2024-05-15 16:36:30.523105] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:23.457 [2024-05-15 16:36:30.523123] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:23.457 [2024-05-15 16:36:30.523132] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:23.457 [2024-05-15 16:36:30.523144] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:23.457 [2024-05-15 16:36:30.523254] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:23.457 [2024-05-15 16:36:30.523264] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:23.457 [2024-05-15 16:36:30.523273] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:23.457 [2024-05-15 16:36:30.524103] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:23.457 [2024-05-15 16:36:30.525106] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:23.457 [2024-05-15 16:36:30.526111] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:23.457 [2024-05-15 16:36:30.527106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.457 [2024-05-15 16:36:30.527187] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:23.457 [2024-05-15 16:36:30.528119] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:23.457 [2024-05-15 16:36:30.528137] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:23.457 [2024-05-15 16:36:30.528146] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.528169] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:23.457 [2024-05-15 16:36:30.528182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.528224] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:23.457 [2024-05-15 16:36:30.528236] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.457 [2024-05-15 16:36:30.528254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.457 [2024-05-15 16:36:30.532233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:23.457 [2024-05-15 16:36:30.532254] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:23.457 [2024-05-15 16:36:30.532278] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:23.457 [2024-05-15 16:36:30.532286] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:23.457 [2024-05-15 16:36:30.532294] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:23.457 [2024-05-15 16:36:30.532308] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:23.457 [2024-05-15 16:36:30.532317] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:23.457 [2024-05-15 16:36:30.532325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.532338] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.532354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:23.457 [2024-05-15 16:36:30.540227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:23.457 [2024-05-15 16:36:30.540249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.457 [2024-05-15 16:36:30.540277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.457 [2024-05-15 16:36:30.540290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.457 [2024-05-15 16:36:30.540307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.457 [2024-05-15 16:36:30.540317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.540334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.540348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:23.457 [2024-05-15 16:36:30.548227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:23.457 [2024-05-15 16:36:30.548258] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:23.457 [2024-05-15 16:36:30.548268] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.548280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.548290] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.548304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:23.457 [2024-05-15 16:36:30.556242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:23.457 [2024-05-15 16:36:30.556305] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.556322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:23.457 [2024-05-15 16:36:30.556335] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:23.457 [2024-05-15 16:36:30.556343] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:23.457 [2024-05-15 16:36:30.556354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:23.457 [2024-05-15 16:36:30.564242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:23.457 [2024-05-15 16:36:30.564265] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:23.457 [2024-05-15 16:36:30.564285] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.564300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.564313] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:23.458 [2024-05-15 16:36:30.564321] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.458 [2024-05-15 16:36:30.564332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.572245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.572273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.572295] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.572309] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:23.458 [2024-05-15 16:36:30.572318] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.458 [2024-05-15 16:36:30.572328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.580225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.580246] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.580259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.580273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.580283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.580292] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.580300] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:23.458 [2024-05-15 16:36:30.580308] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:23.458 [2024-05-15 16:36:30.580317] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:23.458 [2024-05-15 16:36:30.580345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.588241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.588268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.596224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.596265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.604225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.604250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.612227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.612252] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:23.458 [2024-05-15 16:36:30.612262] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:23.458 [2024-05-15 16:36:30.612268] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:23.458 [2024-05-15 16:36:30.612275] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:23.458 [2024-05-15 16:36:30.612285] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:23.458 [2024-05-15 16:36:30.612300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:23.458 [2024-05-15 16:36:30.612310] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:23.458 [2024-05-15 16:36:30.612319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.612330] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:23.458 [2024-05-15 16:36:30.612338] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.458 [2024-05-15 16:36:30.612346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.612358] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:23.458 [2024-05-15 16:36:30.612366] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:23.458 [2024-05-15 16:36:30.612375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:23.458 [2024-05-15 16:36:30.620242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.620270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.620286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:23.458 [2024-05-15 16:36:30.620301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:23.458 ===================================================== 00:15:23.458 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.458 ===================================================== 00:15:23.458 Controller Capabilities/Features 00:15:23.458 ================================ 00:15:23.458 Vendor ID: 4e58 00:15:23.458 Subsystem Vendor ID: 4e58 00:15:23.458 Serial Number: SPDK2 00:15:23.458 Model Number: SPDK bdev Controller 00:15:23.458 Firmware Version: 24.05 00:15:23.458 Recommended Arb Burst: 6 00:15:23.458 IEEE OUI Identifier: 8d 6b 50 00:15:23.458 Multi-path I/O 00:15:23.458 May have multiple subsystem ports: Yes 00:15:23.458 May have multiple controllers: Yes 00:15:23.458 Associated with SR-IOV VF: No 00:15:23.458 Max Data Transfer Size: 131072 00:15:23.458 Max Number of Namespaces: 32 00:15:23.458 Max Number of I/O Queues: 127 00:15:23.458 NVMe Specification Version (VS): 1.3 00:15:23.458 NVMe Specification Version (Identify): 1.3 00:15:23.458 Maximum Queue Entries: 256 00:15:23.458 Contiguous Queues Required: Yes 00:15:23.458 Arbitration Mechanisms Supported 00:15:23.458 Weighted Round Robin: Not Supported 00:15:23.458 Vendor Specific: Not Supported 00:15:23.458 Reset Timeout: 15000 ms 00:15:23.458 Doorbell Stride: 4 bytes 
00:15:23.458 NVM Subsystem Reset: Not Supported 00:15:23.458 Command Sets Supported 00:15:23.458 NVM Command Set: Supported 00:15:23.458 Boot Partition: Not Supported 00:15:23.458 Memory Page Size Minimum: 4096 bytes 00:15:23.458 Memory Page Size Maximum: 4096 bytes 00:15:23.458 Persistent Memory Region: Not Supported 00:15:23.458 Optional Asynchronous Events Supported 00:15:23.458 Namespace Attribute Notices: Supported 00:15:23.458 Firmware Activation Notices: Not Supported 00:15:23.458 ANA Change Notices: Not Supported 00:15:23.458 PLE Aggregate Log Change Notices: Not Supported 00:15:23.458 LBA Status Info Alert Notices: Not Supported 00:15:23.458 EGE Aggregate Log Change Notices: Not Supported 00:15:23.458 Normal NVM Subsystem Shutdown event: Not Supported 00:15:23.458 Zone Descriptor Change Notices: Not Supported 00:15:23.458 Discovery Log Change Notices: Not Supported 00:15:23.458 Controller Attributes 00:15:23.458 128-bit Host Identifier: Supported 00:15:23.458 Non-Operational Permissive Mode: Not Supported 00:15:23.458 NVM Sets: Not Supported 00:15:23.458 Read Recovery Levels: Not Supported 00:15:23.458 Endurance Groups: Not Supported 00:15:23.458 Predictable Latency Mode: Not Supported 00:15:23.458 Traffic Based Keep ALive: Not Supported 00:15:23.459 Namespace Granularity: Not Supported 00:15:23.459 SQ Associations: Not Supported 00:15:23.459 UUID List: Not Supported 00:15:23.459 Multi-Domain Subsystem: Not Supported 00:15:23.459 Fixed Capacity Management: Not Supported 00:15:23.459 Variable Capacity Management: Not Supported 00:15:23.459 Delete Endurance Group: Not Supported 00:15:23.459 Delete NVM Set: Not Supported 00:15:23.459 Extended LBA Formats Supported: Not Supported 00:15:23.459 Flexible Data Placement Supported: Not Supported 00:15:23.459 00:15:23.459 Controller Memory Buffer Support 00:15:23.459 ================================ 00:15:23.459 Supported: No 00:15:23.459 00:15:23.459 Persistent Memory Region Support 00:15:23.459 ================================ 00:15:23.459 Supported: No 00:15:23.459 00:15:23.459 Admin Command Set Attributes 00:15:23.459 ============================ 00:15:23.459 Security Send/Receive: Not Supported 00:15:23.459 Format NVM: Not Supported 00:15:23.459 Firmware Activate/Download: Not Supported 00:15:23.459 Namespace Management: Not Supported 00:15:23.459 Device Self-Test: Not Supported 00:15:23.459 Directives: Not Supported 00:15:23.459 NVMe-MI: Not Supported 00:15:23.459 Virtualization Management: Not Supported 00:15:23.459 Doorbell Buffer Config: Not Supported 00:15:23.459 Get LBA Status Capability: Not Supported 00:15:23.459 Command & Feature Lockdown Capability: Not Supported 00:15:23.459 Abort Command Limit: 4 00:15:23.459 Async Event Request Limit: 4 00:15:23.459 Number of Firmware Slots: N/A 00:15:23.459 Firmware Slot 1 Read-Only: N/A 00:15:23.459 Firmware Activation Without Reset: N/A 00:15:23.459 Multiple Update Detection Support: N/A 00:15:23.459 Firmware Update Granularity: No Information Provided 00:15:23.459 Per-Namespace SMART Log: No 00:15:23.459 Asymmetric Namespace Access Log Page: Not Supported 00:15:23.459 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:23.459 Command Effects Log Page: Supported 00:15:23.459 Get Log Page Extended Data: Supported 00:15:23.459 Telemetry Log Pages: Not Supported 00:15:23.459 Persistent Event Log Pages: Not Supported 00:15:23.459 Supported Log Pages Log Page: May Support 00:15:23.459 Commands Supported & Effects Log Page: Not Supported 00:15:23.459 Feature Identifiers & Effects Log Page:May 
Support 00:15:23.459 NVMe-MI Commands & Effects Log Page: May Support 00:15:23.459 Data Area 4 for Telemetry Log: Not Supported 00:15:23.459 Error Log Page Entries Supported: 128 00:15:23.459 Keep Alive: Supported 00:15:23.459 Keep Alive Granularity: 10000 ms 00:15:23.459 00:15:23.459 NVM Command Set Attributes 00:15:23.459 ========================== 00:15:23.459 Submission Queue Entry Size 00:15:23.459 Max: 64 00:15:23.459 Min: 64 00:15:23.459 Completion Queue Entry Size 00:15:23.459 Max: 16 00:15:23.459 Min: 16 00:15:23.459 Number of Namespaces: 32 00:15:23.459 Compare Command: Supported 00:15:23.459 Write Uncorrectable Command: Not Supported 00:15:23.459 Dataset Management Command: Supported 00:15:23.459 Write Zeroes Command: Supported 00:15:23.459 Set Features Save Field: Not Supported 00:15:23.459 Reservations: Not Supported 00:15:23.459 Timestamp: Not Supported 00:15:23.459 Copy: Supported 00:15:23.459 Volatile Write Cache: Present 00:15:23.459 Atomic Write Unit (Normal): 1 00:15:23.459 Atomic Write Unit (PFail): 1 00:15:23.459 Atomic Compare & Write Unit: 1 00:15:23.459 Fused Compare & Write: Supported 00:15:23.459 Scatter-Gather List 00:15:23.459 SGL Command Set: Supported (Dword aligned) 00:15:23.459 SGL Keyed: Not Supported 00:15:23.459 SGL Bit Bucket Descriptor: Not Supported 00:15:23.459 SGL Metadata Pointer: Not Supported 00:15:23.459 Oversized SGL: Not Supported 00:15:23.459 SGL Metadata Address: Not Supported 00:15:23.459 SGL Offset: Not Supported 00:15:23.459 Transport SGL Data Block: Not Supported 00:15:23.459 Replay Protected Memory Block: Not Supported 00:15:23.459 00:15:23.459 Firmware Slot Information 00:15:23.459 ========================= 00:15:23.459 Active slot: 1 00:15:23.459 Slot 1 Firmware Revision: 24.05 00:15:23.459 00:15:23.459 00:15:23.459 Commands Supported and Effects 00:15:23.459 ============================== 00:15:23.459 Admin Commands 00:15:23.459 -------------- 00:15:23.459 Get Log Page (02h): Supported 00:15:23.459 Identify (06h): Supported 00:15:23.459 Abort (08h): Supported 00:15:23.459 Set Features (09h): Supported 00:15:23.459 Get Features (0Ah): Supported 00:15:23.459 Asynchronous Event Request (0Ch): Supported 00:15:23.459 Keep Alive (18h): Supported 00:15:23.459 I/O Commands 00:15:23.459 ------------ 00:15:23.459 Flush (00h): Supported LBA-Change 00:15:23.459 Write (01h): Supported LBA-Change 00:15:23.459 Read (02h): Supported 00:15:23.459 Compare (05h): Supported 00:15:23.459 Write Zeroes (08h): Supported LBA-Change 00:15:23.459 Dataset Management (09h): Supported LBA-Change 00:15:23.459 Copy (19h): Supported LBA-Change 00:15:23.459 Unknown (79h): Supported LBA-Change 00:15:23.459 Unknown (7Ah): Supported 00:15:23.459 00:15:23.459 Error Log 00:15:23.459 ========= 00:15:23.459 00:15:23.459 Arbitration 00:15:23.459 =========== 00:15:23.459 Arbitration Burst: 1 00:15:23.459 00:15:23.459 Power Management 00:15:23.459 ================ 00:15:23.459 Number of Power States: 1 00:15:23.459 Current Power State: Power State #0 00:15:23.459 Power State #0: 00:15:23.459 Max Power: 0.00 W 00:15:23.459 Non-Operational State: Operational 00:15:23.459 Entry Latency: Not Reported 00:15:23.459 Exit Latency: Not Reported 00:15:23.460 Relative Read Throughput: 0 00:15:23.460 Relative Read Latency: 0 00:15:23.460 Relative Write Throughput: 0 00:15:23.460 Relative Write Latency: 0 00:15:23.460 Idle Power: Not Reported 00:15:23.460 Active Power: Not Reported 00:15:23.460 Non-Operational Permissive Mode: Not Supported 00:15:23.460 00:15:23.460 Health Information 
00:15:23.460 ================== 00:15:23.460 Critical Warnings: 00:15:23.460 Available Spare Space: OK 00:15:23.460 Temperature: OK 00:15:23.460 Device Reliability: OK 00:15:23.460 Read Only: No 00:15:23.460 Volatile Memory Backup: OK 00:15:23.460 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:23.460 [2024-05-15 16:36:30.620426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:23.460 [2024-05-15 16:36:30.628225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:23.460 [2024-05-15 16:36:30.628271] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:23.460 [2024-05-15 16:36:30.628287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.460 [2024-05-15 16:36:30.628298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.460 [2024-05-15 16:36:30.628308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.460 [2024-05-15 16:36:30.628318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.460 [2024-05-15 16:36:30.628399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:23.460 [2024-05-15 16:36:30.628420] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:23.460 [2024-05-15 16:36:30.629394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.460 [2024-05-15 16:36:30.632234] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:23.460 [2024-05-15 16:36:30.632250] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:23.460 [2024-05-15 16:36:30.632421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:23.460 [2024-05-15 16:36:30.632444] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:23.460 [2024-05-15 16:36:30.632514] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:23.460 [2024-05-15 16:36:30.633712] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:23.460 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:23.460 Available Spare: 0% 00:15:23.460 Available Spare Threshold: 0% 00:15:23.460 Life Percentage Used: 0% 00:15:23.460 Data Units Read: 0 00:15:23.460 Data Units Written: 0 00:15:23.460 Host Read Commands: 0 00:15:23.460 Host Write Commands: 0 00:15:23.460 Controller Busy Time: 0 minutes 00:15:23.460 Power Cycles: 0 00:15:23.460 Power On Hours: 0 hours 00:15:23.460 Unsafe Shutdowns: 0 00:15:23.460 Unrecoverable Media Errors: 0 00:15:23.460 Lifetime Error Log Entries: 0 00:15:23.460 Warning Temperature Time: 0
minutes 00:15:23.460 Critical Temperature Time: 0 minutes 00:15:23.460 00:15:23.460 Number of Queues 00:15:23.460 ================ 00:15:23.460 Number of I/O Submission Queues: 127 00:15:23.460 Number of I/O Completion Queues: 127 00:15:23.460 00:15:23.460 Active Namespaces 00:15:23.460 ================= 00:15:23.460 Namespace ID:1 00:15:23.460 Error Recovery Timeout: Unlimited 00:15:23.460 Command Set Identifier: NVM (00h) 00:15:23.460 Deallocate: Supported 00:15:23.460 Deallocated/Unwritten Error: Not Supported 00:15:23.460 Deallocated Read Value: Unknown 00:15:23.460 Deallocate in Write Zeroes: Not Supported 00:15:23.460 Deallocated Guard Field: 0xFFFF 00:15:23.460 Flush: Supported 00:15:23.460 Reservation: Supported 00:15:23.460 Namespace Sharing Capabilities: Multiple Controllers 00:15:23.460 Size (in LBAs): 131072 (0GiB) 00:15:23.460 Capacity (in LBAs): 131072 (0GiB) 00:15:23.460 Utilization (in LBAs): 131072 (0GiB) 00:15:23.460 NGUID: 3DED014893694A80A18DA8B1E19E252C 00:15:23.460 UUID: 3ded0148-9369-4a80-a18d-a8b1e19e252c 00:15:23.460 Thin Provisioning: Not Supported 00:15:23.460 Per-NS Atomic Units: Yes 00:15:23.460 Atomic Boundary Size (Normal): 0 00:15:23.460 Atomic Boundary Size (PFail): 0 00:15:23.460 Atomic Boundary Offset: 0 00:15:23.460 Maximum Single Source Range Length: 65535 00:15:23.460 Maximum Copy Length: 65535 00:15:23.460 Maximum Source Range Count: 1 00:15:23.460 NGUID/EUI64 Never Reused: No 00:15:23.460 Namespace Write Protected: No 00:15:23.460 Number of LBA Formats: 1 00:15:23.460 Current LBA Format: LBA Format #00 00:15:23.460 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:23.460 00:15:23.460 16:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:23.718 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.718 [2024-05-15 16:36:30.860148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.978 Initializing NVMe Controllers 00:15:28.978 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.978 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:28.978 Initialization complete. Launching workers. 
00:15:28.978 ======================================================== 00:15:28.978 Latency(us) 00:15:28.978 Device Information : IOPS MiB/s Average min max 00:15:28.978 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34250.68 133.79 3736.45 1155.64 8211.09 00:15:28.978 ======================================================== 00:15:28.978 Total : 34250.68 133.79 3736.45 1155.64 8211.09 00:15:28.978 00:15:28.978 [2024-05-15 16:36:35.966564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.978 16:36:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:28.978 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.978 [2024-05-15 16:36:36.195195] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.287 Initializing NVMe Controllers 00:15:34.287 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.287 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.288 Initialization complete. Launching workers. 00:15:34.288 ======================================================== 00:15:34.288 Latency(us) 00:15:34.288 Device Information : IOPS MiB/s Average min max 00:15:34.288 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31720.98 123.91 4034.49 1200.40 10373.67 00:15:34.288 ======================================================== 00:15:34.288 Total : 31720.98 123.91 4034.49 1200.40 10373.67 00:15:34.288 00:15:34.288 [2024-05-15 16:36:41.216072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.288 16:36:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:34.288 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.288 [2024-05-15 16:36:41.447995] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.551 [2024-05-15 16:36:46.578361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.551 Initializing NVMe Controllers 00:15:39.551 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.551 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:39.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:39.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:39.551 Initialization complete. Launching workers. 
00:15:39.551 Starting thread on core 2 00:15:39.551 Starting thread on core 3 00:15:39.551 Starting thread on core 1 00:15:39.551 16:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:39.551 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.809 [2024-05-15 16:36:46.888657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.089 [2024-05-15 16:36:49.948186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.089 Initializing NVMe Controllers 00:15:43.089 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.089 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.089 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:43.089 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:43.089 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:43.089 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:43.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:43.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:43.089 Initialization complete. Launching workers. 00:15:43.089 Starting thread on core 1 with urgent priority queue 00:15:43.089 Starting thread on core 2 with urgent priority queue 00:15:43.089 Starting thread on core 3 with urgent priority queue 00:15:43.089 Starting thread on core 0 with urgent priority queue 00:15:43.089 SPDK bdev Controller (SPDK2 ) core 0: 6038.67 IO/s 16.56 secs/100000 ios 00:15:43.089 SPDK bdev Controller (SPDK2 ) core 1: 5611.00 IO/s 17.82 secs/100000 ios 00:15:43.089 SPDK bdev Controller (SPDK2 ) core 2: 6203.67 IO/s 16.12 secs/100000 ios 00:15:43.089 SPDK bdev Controller (SPDK2 ) core 3: 5328.33 IO/s 18.77 secs/100000 ios 00:15:43.089 ======================================================== 00:15:43.089 00:15:43.090 16:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.090 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.090 [2024-05-15 16:36:50.266933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.090 Initializing NVMe Controllers 00:15:43.090 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.090 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.090 Namespace ID: 1 size: 0GB 00:15:43.090 Initialization complete. 00:15:43.090 INFO: using host memory buffer for IO 00:15:43.090 Hello world! 
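(For context: the vfio-user endpoints these example apps attach to are created on the target side earlier in this job. A minimal sketch of that setup, reconstructed from the names and paths visible in the invocations and nvmf_get_subsystems output above, not a verbatim excerpt of the job's own script, is:

  # Assumes an SPDK nvmf target application is already running.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  # Serial SPDK2 and allow-any-host (-a) match the subsystem JSON above.
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
  # 64 MiB malloc bdev with 512-byte blocks: 131072 LBAs, as reported by identify.
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
  # trsvcid 0 and the socket directory match the traddr used by the clients.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0

Each client run above then connects with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'.)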
00:15:43.090 [2024-05-15 16:36:50.275990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.347 16:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.347 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.604 [2024-05-15 16:36:50.577278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.538 Initializing NVMe Controllers 00:15:44.538 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:44.538 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:44.538 Initialization complete. Launching workers. 00:15:44.538 submit (in ns) avg, min, max = 7234.5, 3542.2, 4001998.9 00:15:44.538 complete (in ns) avg, min, max = 24734.7, 2056.7, 4021734.4 00:15:44.538 00:15:44.538 Submit histogram 00:15:44.538 ================ 00:15:44.538 Range in us Cumulative Count 00:15:44.538 3.532 - 3.556: 0.0375% ( 5) 00:15:44.538 3.556 - 3.579: 2.6168% ( 344) 00:15:44.538 3.579 - 3.603: 11.3444% ( 1164) 00:15:44.538 3.603 - 3.627: 23.7310% ( 1652) 00:15:44.538 3.627 - 3.650: 33.3958% ( 1289) 00:15:44.538 3.650 - 3.674: 39.1392% ( 766) 00:15:44.538 3.674 - 3.698: 44.7102% ( 743) 00:15:44.538 3.698 - 3.721: 50.7910% ( 811) 00:15:44.538 3.721 - 3.745: 56.0396% ( 700) 00:15:44.538 3.745 - 3.769: 59.6836% ( 486) 00:15:44.538 3.769 - 3.793: 62.3529% ( 356) 00:15:44.538 3.793 - 3.816: 65.2171% ( 382) 00:15:44.538 3.816 - 3.840: 69.1910% ( 530) 00:15:44.538 3.840 - 3.864: 74.7844% ( 746) 00:15:44.538 3.864 - 3.887: 79.3957% ( 615) 00:15:44.538 3.887 - 3.911: 82.6048% ( 428) 00:15:44.538 3.911 - 3.935: 84.9067% ( 307) 00:15:44.538 3.935 - 3.959: 86.8861% ( 264) 00:15:44.538 3.959 - 3.982: 88.6106% ( 230) 00:15:44.538 3.982 - 4.006: 90.0277% ( 189) 00:15:44.538 4.006 - 4.030: 91.1824% ( 154) 00:15:44.538 4.030 - 4.053: 92.0972% ( 122) 00:15:44.538 4.053 - 4.077: 93.0119% ( 122) 00:15:44.538 4.077 - 4.101: 94.1441% ( 151) 00:15:44.538 4.101 - 4.124: 94.8264% ( 91) 00:15:44.538 4.124 - 4.148: 95.4712% ( 86) 00:15:44.538 4.148 - 4.172: 95.7637% ( 39) 00:15:44.538 4.172 - 4.196: 96.1161% ( 47) 00:15:44.538 4.196 - 4.219: 96.3410% ( 30) 00:15:44.538 4.219 - 4.243: 96.5659% ( 30) 00:15:44.538 4.243 - 4.267: 96.7459% ( 24) 00:15:44.538 4.267 - 4.290: 96.9183% ( 23) 00:15:44.538 4.290 - 4.314: 97.0608% ( 19) 00:15:44.538 4.314 - 4.338: 97.1433% ( 11) 00:15:44.538 4.338 - 4.361: 97.2408% ( 13) 00:15:44.538 4.361 - 4.385: 97.3007% ( 8) 00:15:44.538 4.385 - 4.409: 97.3532% ( 7) 00:15:44.538 4.409 - 4.433: 97.3982% ( 6) 00:15:44.538 4.433 - 4.456: 97.4357% ( 5) 00:15:44.538 4.456 - 4.480: 97.4507% ( 2) 00:15:44.538 4.480 - 4.504: 97.5032% ( 7) 00:15:44.538 4.527 - 4.551: 97.5107% ( 1) 00:15:44.538 4.551 - 4.575: 97.5182% ( 1) 00:15:44.538 4.575 - 4.599: 97.5257% ( 1) 00:15:44.538 4.599 - 4.622: 97.5407% ( 2) 00:15:44.538 4.622 - 4.646: 97.5482% ( 1) 00:15:44.538 4.693 - 4.717: 97.5557% ( 1) 00:15:44.538 4.741 - 4.764: 97.5632% ( 1) 00:15:44.538 4.764 - 4.788: 97.5932% ( 4) 00:15:44.538 4.788 - 4.812: 97.6082% ( 2) 00:15:44.538 4.812 - 4.836: 97.6531% ( 6) 00:15:44.538 4.836 - 4.859: 97.6831% ( 4) 00:15:44.538 4.859 - 4.883: 97.7056% ( 3) 00:15:44.538 4.883 - 4.907: 97.7356% ( 4) 00:15:44.538 4.907 - 4.930: 97.7656% ( 4) 00:15:44.538 4.930 - 4.954: 97.8331% ( 9) 00:15:44.538 4.954 
- 4.978: 97.8931% ( 8) 00:15:44.538 4.978 - 5.001: 97.9081% ( 2) 00:15:44.538 5.001 - 5.025: 97.9306% ( 3) 00:15:44.538 5.025 - 5.049: 97.9531% ( 3) 00:15:44.538 5.049 - 5.073: 97.9981% ( 6) 00:15:44.538 5.073 - 5.096: 98.0355% ( 5) 00:15:44.538 5.096 - 5.120: 98.1030% ( 9) 00:15:44.538 5.120 - 5.144: 98.1405% ( 5) 00:15:44.538 5.144 - 5.167: 98.1555% ( 2) 00:15:44.538 5.167 - 5.191: 98.1930% ( 5) 00:15:44.538 5.191 - 5.215: 98.2080% ( 2) 00:15:44.538 5.215 - 5.239: 98.2305% ( 3) 00:15:44.538 5.239 - 5.262: 98.2380% ( 1) 00:15:44.538 5.262 - 5.286: 98.2530% ( 2) 00:15:44.538 5.286 - 5.310: 98.2680% ( 2) 00:15:44.538 5.310 - 5.333: 98.2905% ( 3) 00:15:44.538 5.333 - 5.357: 98.3205% ( 4) 00:15:44.538 5.381 - 5.404: 98.3280% ( 1) 00:15:44.538 5.404 - 5.428: 98.3505% ( 3) 00:15:44.538 5.428 - 5.452: 98.3580% ( 1) 00:15:44.538 5.452 - 5.476: 98.3654% ( 1) 00:15:44.538 5.618 - 5.641: 98.3804% ( 2) 00:15:44.538 5.665 - 5.689: 98.3879% ( 1) 00:15:44.538 5.713 - 5.736: 98.3954% ( 1) 00:15:44.538 5.760 - 5.784: 98.4029% ( 1) 00:15:44.538 5.807 - 5.831: 98.4104% ( 1) 00:15:44.538 5.950 - 5.973: 98.4179% ( 1) 00:15:44.538 6.044 - 6.068: 98.4254% ( 1) 00:15:44.538 6.068 - 6.116: 98.4404% ( 2) 00:15:44.538 6.116 - 6.163: 98.4479% ( 1) 00:15:44.538 6.163 - 6.210: 98.4554% ( 1) 00:15:44.538 6.258 - 6.305: 98.4704% ( 2) 00:15:44.538 6.305 - 6.353: 98.4854% ( 2) 00:15:44.538 6.447 - 6.495: 98.4929% ( 1) 00:15:44.538 6.542 - 6.590: 98.5004% ( 1) 00:15:44.538 6.732 - 6.779: 98.5229% ( 3) 00:15:44.538 6.779 - 6.827: 98.5304% ( 1) 00:15:44.538 6.874 - 6.921: 98.5379% ( 1) 00:15:44.538 6.969 - 7.016: 98.5454% ( 1) 00:15:44.538 7.016 - 7.064: 98.5529% ( 1) 00:15:44.538 7.064 - 7.111: 98.5604% ( 1) 00:15:44.538 7.111 - 7.159: 98.5754% ( 2) 00:15:44.538 7.159 - 7.206: 98.5829% ( 1) 00:15:44.538 7.206 - 7.253: 98.5904% ( 1) 00:15:44.538 7.301 - 7.348: 98.5979% ( 1) 00:15:44.538 7.348 - 7.396: 98.6054% ( 1) 00:15:44.538 7.396 - 7.443: 98.6129% ( 1) 00:15:44.538 7.538 - 7.585: 98.6204% ( 1) 00:15:44.539 7.633 - 7.680: 98.6279% ( 1) 00:15:44.539 7.680 - 7.727: 98.6354% ( 1) 00:15:44.539 7.775 - 7.822: 98.6504% ( 2) 00:15:44.539 7.822 - 7.870: 98.6654% ( 2) 00:15:44.539 7.917 - 7.964: 98.6804% ( 2) 00:15:44.539 7.964 - 8.012: 98.6954% ( 2) 00:15:44.539 8.012 - 8.059: 98.7104% ( 2) 00:15:44.539 8.107 - 8.154: 98.7179% ( 1) 00:15:44.539 8.154 - 8.201: 98.7254% ( 1) 00:15:44.539 8.296 - 8.344: 98.7328% ( 1) 00:15:44.539 8.344 - 8.391: 98.7403% ( 1) 00:15:44.539 8.391 - 8.439: 98.7478% ( 1) 00:15:44.539 8.439 - 8.486: 98.7553% ( 1) 00:15:44.539 8.533 - 8.581: 98.7778% ( 3) 00:15:44.539 8.818 - 8.865: 98.7853% ( 1) 00:15:44.539 9.007 - 9.055: 98.7928% ( 1) 00:15:44.539 9.055 - 9.102: 98.8003% ( 1) 00:15:44.539 9.292 - 9.339: 98.8078% ( 1) 00:15:44.539 9.908 - 9.956: 98.8153% ( 1) 00:15:44.539 10.667 - 10.714: 98.8228% ( 1) 00:15:44.539 11.473 - 11.520: 98.8303% ( 1) 00:15:44.539 11.615 - 11.662: 98.8453% ( 2) 00:15:44.539 11.804 - 11.852: 98.8528% ( 1) 00:15:44.539 12.136 - 12.231: 98.8603% ( 1) 00:15:44.539 12.326 - 12.421: 98.8678% ( 1) 00:15:44.539 12.516 - 12.610: 98.8828% ( 2) 00:15:44.539 12.705 - 12.800: 98.8978% ( 2) 00:15:44.539 12.895 - 12.990: 98.9053% ( 1) 00:15:44.539 12.990 - 13.084: 98.9128% ( 1) 00:15:44.539 13.179 - 13.274: 98.9203% ( 1) 00:15:44.539 13.559 - 13.653: 98.9278% ( 1) 00:15:44.539 13.653 - 13.748: 98.9353% ( 1) 00:15:44.539 14.412 - 14.507: 98.9428% ( 1) 00:15:44.539 14.601 - 14.696: 98.9503% ( 1) 00:15:44.539 14.696 - 14.791: 98.9578% ( 1) 00:15:44.539 15.076 - 15.170: 98.9653% ( 1) 
00:15:44.539 15.455 - 15.550: 98.9728% ( 1) 00:15:44.539 15.834 - 15.929: 98.9803% ( 1) 00:15:44.539 17.161 - 17.256: 98.9953% ( 2) 00:15:44.539 17.256 - 17.351: 99.0178% ( 3) 00:15:44.539 17.351 - 17.446: 99.0253% ( 1) 00:15:44.539 17.446 - 17.541: 99.0328% ( 1) 00:15:44.539 17.541 - 17.636: 99.0553% ( 3) 00:15:44.539 17.636 - 17.730: 99.0778% ( 3) 00:15:44.539 17.730 - 17.825: 99.1227% ( 6) 00:15:44.539 17.825 - 17.920: 99.1902% ( 9) 00:15:44.539 17.920 - 18.015: 99.2877% ( 13) 00:15:44.539 18.015 - 18.110: 99.3252% ( 5) 00:15:44.539 18.110 - 18.204: 99.3927% ( 9) 00:15:44.539 18.204 - 18.299: 99.4377% ( 6) 00:15:44.539 18.299 - 18.394: 99.5426% ( 14) 00:15:44.539 18.394 - 18.489: 99.6026% ( 8) 00:15:44.539 18.489 - 18.584: 99.6176% ( 2) 00:15:44.539 18.584 - 18.679: 99.6626% ( 6) 00:15:44.539 18.679 - 18.773: 99.7001% ( 5) 00:15:44.539 18.773 - 18.868: 99.7376% ( 5) 00:15:44.539 18.868 - 18.963: 99.7451% ( 1) 00:15:44.539 18.963 - 19.058: 99.7751% ( 4) 00:15:44.539 19.058 - 19.153: 99.7826% ( 1) 00:15:44.539 19.153 - 19.247: 99.8051% ( 3) 00:15:44.539 19.247 - 19.342: 99.8200% ( 2) 00:15:44.539 19.437 - 19.532: 99.8500% ( 4) 00:15:44.539 19.532 - 19.627: 99.8575% ( 1) 00:15:44.539 19.627 - 19.721: 99.8650% ( 1) 00:15:44.539 20.006 - 20.101: 99.8800% ( 2) 00:15:44.539 20.764 - 20.859: 99.8875% ( 1) 00:15:44.539 22.092 - 22.187: 99.8950% ( 1) 00:15:44.539 22.376 - 22.471: 99.9025% ( 1) 00:15:44.539 23.893 - 23.988: 99.9100% ( 1) 00:15:44.539 28.444 - 28.634: 99.9175% ( 1) 00:15:44.539 3980.705 - 4004.978: 100.0000% ( 11) 00:15:44.539 00:15:44.539 Complete histogram 00:15:44.539 ================== 00:15:44.539 Range in us Cumulative Count 00:15:44.539 2.050 - 2.062: 0.1200% ( 16) 00:15:44.539 2.062 - 2.074: 12.3191% ( 1627) 00:15:44.539 2.074 - 2.086: 22.9437% ( 1417) 00:15:44.539 2.086 - 2.098: 26.3328% ( 452) 00:15:44.539 2.098 - 2.110: 49.9138% ( 3145) 00:15:44.539 2.110 - 2.121: 56.7969% ( 918) 00:15:44.539 2.121 - 2.133: 58.7388% ( 259) 00:15:44.539 2.133 - 2.145: 64.7522% ( 802) 00:15:44.539 2.145 - 2.157: 67.0016% ( 300) 00:15:44.539 2.157 - 2.169: 70.0532% ( 407) 00:15:44.539 2.169 - 2.181: 77.8736% ( 1043) 00:15:44.539 2.181 - 2.193: 79.8980% ( 270) 00:15:44.539 2.193 - 2.204: 81.0302% ( 151) 00:15:44.539 2.204 - 2.216: 84.3668% ( 445) 00:15:44.539 2.216 - 2.228: 85.8364% ( 196) 00:15:44.539 2.228 - 2.240: 87.0511% ( 162) 00:15:44.539 2.240 - 2.252: 91.4074% ( 581) 00:15:44.539 2.252 - 2.264: 92.7645% ( 181) 00:15:44.539 2.264 - 2.276: 93.2219% ( 61) 00:15:44.539 2.276 - 2.287: 93.9192% ( 93) 00:15:44.539 2.287 - 2.299: 94.2416% ( 43) 00:15:44.539 2.299 - 2.311: 94.4740% ( 31) 00:15:44.539 2.311 - 2.323: 94.9164% ( 59) 00:15:44.539 2.323 - 2.335: 95.2088% ( 39) 00:15:44.539 2.335 - 2.347: 95.3513% ( 19) 00:15:44.539 2.347 - 2.359: 95.6887% ( 45) 00:15:44.539 2.359 - 2.370: 95.9361% ( 33) 00:15:44.539 2.370 - 2.382: 96.2210% ( 38) 00:15:44.539 2.382 - 2.394: 96.6559% ( 58) 00:15:44.539 2.394 - 2.406: 97.0758% ( 56) 00:15:44.539 2.406 - 2.418: 97.2708% ( 26) 00:15:44.539 2.418 - 2.430: 97.4957% ( 30) 00:15:44.539 2.430 - 2.441: 97.7056% ( 28) 00:15:44.539 2.441 - 2.453: 97.8256% ( 16) 00:15:44.539 2.453 - 2.465: 97.9531% ( 17) 00:15:44.539 2.465 - 2.477: 98.0655% ( 15) 00:15:44.539 2.477 - 2.489: 98.1480% ( 11) 00:15:44.539 2.489 - 2.501: 98.2380% ( 12) 00:15:44.539 2.501 - 2.513: 98.3055% ( 9) 00:15:44.539 2.513 - 2.524: 98.3280% ( 3) 00:15:44.539 2.524 - 2.536: 98.3505% ( 3) 00:15:44.539 2.536 - 2.548: 98.3580% ( 1) 00:15:44.539 2.548 - 2.560: 98.3654% ( 1) 00:15:44.539 2.560 - 
2.572: 98.3729% ( 1) 00:15:44.539 2.572 - 2.584: 98.3879% ( 2) 00:15:44.539 2.584 - 2.596: 98.3954% ( 1) 00:15:44.539 2.643 - 2.655: 98.4104% ( 2) 00:15:44.539 2.655 - 2.667: 98.4254% ( 2) 00:15:44.539 2.679 - 2.690: 98.4329% ( 1) 00:15:44.539 2.690 - 2.702: 98.4404% ( 1) 00:15:44.539 2.702 - 2.714: 98.4554% ( 2) 00:15:44.539 2.726 - 2.738: 98.4629% ( 1) 00:15:44.539 2.785 - 2.797: 98.4704% ( 1) 00:15:44.539 2.975 - 2.987: 98.4779% ( 1) 00:15:44.539 2.999 - 3.010: 98.4854% ( 1) 00:15:44.539 3.129 - 3.153: 98.4929% ( 1) 00:15:44.539 3.247 - 3.271: 98.5004% ( 1) 00:15:44.539 3.366 - 3.390: 98.5154% ( 2) 00:15:44.539 3.390 - 3.413: 98.5229% ( 1) 00:15:44.539 3.413 - 3.437: 98.5304% ( 1) 00:15:44.539 3.437 - 3.461: 98.5454% ( 2) 00:15:44.539 3.461 - 3.484: 98.5754% ( 4) 00:15:44.539 3.484 - 3.508: 98.5829% ( 1) 00:15:44.539 3.508 - 3.532: 98.5904% ( 1) 00:15:44.539 3.532 - 3.556: 98.5979% ( 1) 00:15:44.539 3.579 - 3.603: 98.6204% ( 3) 00:15:44.539 3.603 - 3.627: 98.6279% ( 1) 00:15:44.539 3.627 - 3.650: 98.6429% ( 2) 00:15:44.539 3.674 - 3.698: 98.6504% ( 1) 00:15:44.539 3.721 - 3.745: 98.6729% ( 3) 00:15:44.539 3.745 - 3.769: 98.6879% ( 2) 00:15:44.539 3.793 - 3.816: 98.6954% ( 1) 00:15:44.539 3.816 - 3.840: 98.7029% ( 1) 00:15:44.539 3.935 - 3.959: 98.7104% ( 1) 00:15:44.539 3.959 - 3.982: 98.7179% ( 1) 00:15:44.539 4.053 - 4.077: 98.7254% ( 1) 00:15:44.539 [2024-05-15 16:36:51.684017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.539 4.575 - 4.599: 98.7328% ( 1) 00:15:44.539 4.954 - 4.978: 98.7403% ( 1) 00:15:44.539 5.262 - 5.286: 98.7478% ( 1) 00:15:44.539 5.784 - 5.807: 98.7553% ( 1) 00:15:44.539 5.902 - 5.926: 98.7703% ( 2) 00:15:44.539 6.021 - 6.044: 98.7778% ( 1) 00:15:44.539 6.068 - 6.116: 98.7928% ( 2) 00:15:44.539 6.163 - 6.210: 98.8003% ( 1) 00:15:44.539 6.305 - 6.353: 98.8078% ( 1) 00:15:44.539 6.400 - 6.447: 98.8153% ( 1) 00:15:44.539 6.447 - 6.495: 98.8228% ( 1) 00:15:44.539 6.495 - 6.542: 98.8303% ( 1) 00:15:44.539 6.732 - 6.779: 98.8378% ( 1) 00:15:44.539 6.779 - 6.827: 98.8453% ( 1) 00:15:44.539 6.969 - 7.016: 98.8528% ( 1) 00:15:44.539 7.159 - 7.206: 98.8603% ( 1) 00:15:44.539 7.775 - 7.822: 98.8678% ( 1) 00:15:44.539 15.455 - 15.550: 98.8828% ( 2) 00:15:44.539 15.550 - 15.644: 98.8978% ( 2) 00:15:44.539 15.644 - 15.739: 98.9203% ( 3) 00:15:44.539 15.739 - 15.834: 98.9278% ( 1) 00:15:44.539 15.834 - 15.929: 98.9428% ( 2) 00:15:44.539 15.929 - 16.024: 98.9728% ( 4) 00:15:44.539 16.024 - 16.119: 98.9803% ( 1) 00:15:44.539 16.119 - 16.213: 99.0028% ( 3) 00:15:44.539 16.213 - 16.308: 99.0328% ( 4) 00:15:44.539 16.308 - 16.403: 99.0628% ( 4) 00:15:44.539 16.403 - 16.498: 99.0927% ( 4) 00:15:44.539 16.498 - 16.593: 99.1677% ( 10) 00:15:44.539 16.593 - 16.687: 99.2202% ( 7) 00:15:44.539 16.687 - 16.782: 99.2652% ( 6) 00:15:44.539 16.782 - 16.877: 99.2877% ( 3) 00:15:44.539 16.877 - 16.972: 99.3027% ( 2) 00:15:44.539 16.972 - 17.067: 99.3102% ( 1) 00:15:44.539 17.067 - 17.161: 99.3252% ( 2) 00:15:44.539 17.161 - 17.256: 99.3327% ( 1) 00:15:44.539 17.256 - 17.351: 99.3552% ( 3) 00:15:44.539 17.351 - 17.446: 99.3627% ( 1) 00:15:44.539 17.446 - 17.541: 99.3702% ( 1) 00:15:44.539 17.730 - 17.825: 99.3777% ( 1) 00:15:44.539 17.825 - 17.920: 99.3852% ( 1) 00:15:44.539 18.015 - 18.110: 99.4077% ( 3) 00:15:44.539 18.204 - 18.299: 99.4152% ( 1) 00:15:44.539 18.394 - 18.489: 99.4227% ( 1) 00:15:44.539 18.679 - 18.773: 99.4302% ( 1) 00:15:44.540 946.631 - 952.699: 99.4377% ( 1) 00:15:44.540 2997.665 - 3009.801: 99.4452% ( 1)
00:15:44.540 3980.705 - 4004.978: 99.8200% ( 50) 00:15:44.540 4004.978 - 4029.250: 100.0000% ( 24) 00:15:44.540 00:15:44.540 16:36:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:44.540 16:36:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:44.540 16:36:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:44.540 16:36:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:44.540 16:36:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:44.797 [ 00:15:44.797 { 00:15:44.797 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.797 "subtype": "Discovery", 00:15:44.797 "listen_addresses": [], 00:15:44.797 "allow_any_host": true, 00:15:44.797 "hosts": [] 00:15:44.797 }, 00:15:44.797 { 00:15:44.797 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:44.797 "subtype": "NVMe", 00:15:44.797 "listen_addresses": [ 00:15:44.797 { 00:15:44.797 "trtype": "VFIOUSER", 00:15:44.797 "adrfam": "IPv4", 00:15:44.797 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:44.797 "trsvcid": "0" 00:15:44.797 } 00:15:44.797 ], 00:15:44.797 "allow_any_host": true, 00:15:44.797 "hosts": [], 00:15:44.797 "serial_number": "SPDK1", 00:15:44.797 "model_number": "SPDK bdev Controller", 00:15:44.797 "max_namespaces": 32, 00:15:44.797 "min_cntlid": 1, 00:15:44.797 "max_cntlid": 65519, 00:15:44.797 "namespaces": [ 00:15:44.797 { 00:15:44.797 "nsid": 1, 00:15:44.797 "bdev_name": "Malloc1", 00:15:44.797 "name": "Malloc1", 00:15:44.797 "nguid": "3523E715EBA5459D9C872FF958EAC551", 00:15:44.797 "uuid": "3523e715-eba5-459d-9c87-2ff958eac551" 00:15:44.797 }, 00:15:44.797 { 00:15:44.797 "nsid": 2, 00:15:44.797 "bdev_name": "Malloc3", 00:15:44.797 "name": "Malloc3", 00:15:44.797 "nguid": "B06DD6B39A7A4BF9A93109BAB4A7B42B", 00:15:44.797 "uuid": "b06dd6b3-9a7a-4bf9-a931-09bab4a7b42b" 00:15:44.797 } 00:15:44.797 ] 00:15:44.797 }, 00:15:44.797 { 00:15:44.797 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:44.797 "subtype": "NVMe", 00:15:44.797 "listen_addresses": [ 00:15:44.797 { 00:15:44.797 "trtype": "VFIOUSER", 00:15:44.797 "adrfam": "IPv4", 00:15:44.797 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:44.797 "trsvcid": "0" 00:15:44.797 } 00:15:44.798 ], 00:15:44.798 "allow_any_host": true, 00:15:44.798 "hosts": [], 00:15:44.798 "serial_number": "SPDK2", 00:15:44.798 "model_number": "SPDK bdev Controller", 00:15:44.798 "max_namespaces": 32, 00:15:44.798 "min_cntlid": 1, 00:15:44.798 "max_cntlid": 65519, 00:15:44.798 "namespaces": [ 00:15:44.798 { 00:15:44.798 "nsid": 1, 00:15:44.798 "bdev_name": "Malloc2", 00:15:44.798 "name": "Malloc2", 00:15:44.798 "nguid": "3DED014893694A80A18DA8B1E19E252C", 00:15:44.798 "uuid": "3ded0148-9369-4a80-a18d-a8b1e19e252c" 00:15:44.798 } 00:15:44.798 ] 00:15:44.798 } 00:15:44.798 ] 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1740141 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 
-n 2 -g -t /tmp/aer_touch_file 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:44.798 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:45.055 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.055 [2024-05-15 16:36:52.189778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.312 Malloc4 00:15:45.312 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:45.312 [2024-05-15 16:36:52.528347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.569 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.569 Asynchronous Event Request test 00:15:45.569 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.569 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.569 Registering asynchronous event callbacks... 00:15:45.569 Starting namespace attribute notice tests for all controllers... 00:15:45.569 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:45.569 aer_cb - Changed Namespace 00:15:45.569 Cleaning up... 
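[editor's sketch] The namespace-change AER exercised above is driven entirely by RPCs visible in this log; a minimal by-hand reproduction against the running target would look roughly like the following (rpc.py path abbreviated; socket path and NQN taken from the run above):

  # with the aer tool attached to /var/run/vfio-user/domain/vfio-user2/2,
  # hot-add a second namespace to cnode2
  rpc.py bdev_malloc_create 64 512 --name Malloc4
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  # the tool's aer_cb should then report a Changed Namespace event (log page 4),
  # and the new namespace (nsid 2, Malloc4) appears in the subsystem listing
  rpc.py nvmf_get_subsystems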
00:15:45.569 [ 00:15:45.569 { 00:15:45.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.569 "subtype": "Discovery", 00:15:45.569 "listen_addresses": [], 00:15:45.569 "allow_any_host": true, 00:15:45.569 "hosts": [] 00:15:45.569 }, 00:15:45.569 { 00:15:45.569 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.569 "subtype": "NVMe", 00:15:45.569 "listen_addresses": [ 00:15:45.569 { 00:15:45.569 "trtype": "VFIOUSER", 00:15:45.569 "adrfam": "IPv4", 00:15:45.569 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.569 "trsvcid": "0" 00:15:45.569 } 00:15:45.569 ], 00:15:45.569 "allow_any_host": true, 00:15:45.569 "hosts": [], 00:15:45.569 "serial_number": "SPDK1", 00:15:45.569 "model_number": "SPDK bdev Controller", 00:15:45.569 "max_namespaces": 32, 00:15:45.569 "min_cntlid": 1, 00:15:45.569 "max_cntlid": 65519, 00:15:45.569 "namespaces": [ 00:15:45.569 { 00:15:45.569 "nsid": 1, 00:15:45.569 "bdev_name": "Malloc1", 00:15:45.569 "name": "Malloc1", 00:15:45.570 "nguid": "3523E715EBA5459D9C872FF958EAC551", 00:15:45.570 "uuid": "3523e715-eba5-459d-9c87-2ff958eac551" 00:15:45.570 }, 00:15:45.570 { 00:15:45.570 "nsid": 2, 00:15:45.570 "bdev_name": "Malloc3", 00:15:45.570 "name": "Malloc3", 00:15:45.570 "nguid": "B06DD6B39A7A4BF9A93109BAB4A7B42B", 00:15:45.570 "uuid": "b06dd6b3-9a7a-4bf9-a931-09bab4a7b42b" 00:15:45.570 } 00:15:45.570 ] 00:15:45.570 }, 00:15:45.570 { 00:15:45.570 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.570 "subtype": "NVMe", 00:15:45.570 "listen_addresses": [ 00:15:45.570 { 00:15:45.570 "trtype": "VFIOUSER", 00:15:45.570 "adrfam": "IPv4", 00:15:45.570 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.570 "trsvcid": "0" 00:15:45.570 } 00:15:45.570 ], 00:15:45.570 "allow_any_host": true, 00:15:45.570 "hosts": [], 00:15:45.570 "serial_number": "SPDK2", 00:15:45.570 "model_number": "SPDK bdev Controller", 00:15:45.570 "max_namespaces": 32, 00:15:45.570 "min_cntlid": 1, 00:15:45.570 "max_cntlid": 65519, 00:15:45.570 "namespaces": [ 00:15:45.570 { 00:15:45.570 "nsid": 1, 00:15:45.570 "bdev_name": "Malloc2", 00:15:45.570 "name": "Malloc2", 00:15:45.570 "nguid": "3DED014893694A80A18DA8B1E19E252C", 00:15:45.570 "uuid": "3ded0148-9369-4a80-a18d-a8b1e19e252c" 00:15:45.570 }, 00:15:45.570 { 00:15:45.570 "nsid": 2, 00:15:45.570 "bdev_name": "Malloc4", 00:15:45.570 "name": "Malloc4", 00:15:45.570 "nguid": "094ABBCD27B8487A91EB9AF6F59E6B49", 00:15:45.570 "uuid": "094abbcd-27b8-487a-91eb-9af6f59e6b49" 00:15:45.570 } 00:15:45.570 ] 00:15:45.570 } 00:15:45.570 ] 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1740141 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1734044 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1734044 ']' 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1734044 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1734044 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1734044' 00:15:45.828 killing process with pid 1734044 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1734044 00:15:45.828 [2024-05-15 16:36:52.826873] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:45.828 16:36:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1734044 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1740283 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1740283' 00:15:46.088 Process pid: 1740283 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1740283 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1740283 ']' 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:46.088 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:46.088 [2024-05-15 16:36:53.199515] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:46.088 [2024-05-15 16:36:53.200563] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:15:46.088 [2024-05-15 16:36:53.200618] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.088 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.088 [2024-05-15 16:36:53.272821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.346 [2024-05-15 16:36:53.360082] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:46.346 [2024-05-15 16:36:53.360146] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.346 [2024-05-15 16:36:53.360162] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.346 [2024-05-15 16:36:53.360176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.346 [2024-05-15 16:36:53.360188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.346 [2024-05-15 16:36:53.360268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.346 [2024-05-15 16:36:53.360351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.346 [2024-05-15 16:36:53.360436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.346 [2024-05-15 16:36:53.360439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.346 [2024-05-15 16:36:53.463693] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:46.346 [2024-05-15 16:36:53.463893] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:46.346 [2024-05-15 16:36:53.464204] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:46.346 [2024-05-15 16:36:53.464745] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:46.346 [2024-05-15 16:36:53.464991] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
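[editor's sketch] Condensed, the interrupt-mode bring-up the script performs next is the same two-device setup as before, only with the target started under --interrupt-mode and the VFIOUSER transport created with the -M -I flags under test. A sketch of the sequence for device 1, with binary and rpc.py paths abbreviated (device 2 is identical with cnode2/SPDK2):

  # start the target pinned to cores 0-3, in interrupt mode
  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0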
00:15:46.346 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.346 16:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:46.346 16:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:47.278 16:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:47.536 16:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:47.536 16:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:47.536 16:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.536 16:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:47.536 16:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:48.103 Malloc1 00:15:48.103 16:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:48.361 16:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:48.663 16:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:48.664 [2024-05-15 16:36:55.840977] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:48.922 16:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.922 16:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:48.922 16:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:48.922 Malloc2 00:15:48.922 16:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:49.179 16:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:49.436 16:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1740283 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1740283 ']' 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1740283 
00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1740283 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1740283' 00:15:49.693 killing process with pid 1740283 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1740283 00:15:49.693 [2024-05-15 16:36:56.896447] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:49.693 16:36:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1740283 00:15:49.950 16:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:49.950 16:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:49.950 00:15:49.950 real 0m52.698s 00:15:49.950 user 3m27.806s 00:15:49.950 sys 0m4.500s 00:15:49.950 16:36:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:49.950 16:36:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:49.950 ************************************ 00:15:49.950 END TEST nvmf_vfio_user 00:15:49.950 ************************************ 00:15:50.208 16:36:57 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.208 16:36:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:50.208 16:36:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.208 16:36:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.208 ************************************ 00:15:50.208 START TEST nvmf_vfio_user_nvme_compliance 00:15:50.208 ************************************ 00:15:50.208 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.209 * Looking for test storage... 
00:15:50.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1740880 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1740880' 00:15:50.209 Process pid: 1740880 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1740880 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1740880 ']' 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.209 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.209 [2024-05-15 16:36:57.334938] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:15:50.209 [2024-05-15 16:36:57.335013] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.209 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.209 [2024-05-15 16:36:57.400352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.467 [2024-05-15 16:36:57.482658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.467 [2024-05-15 16:36:57.482708] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.467 [2024-05-15 16:36:57.482724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.467 [2024-05-15 16:36:57.482736] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.467 [2024-05-15 16:36:57.482748] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:50.467 [2024-05-15 16:36:57.482823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.467 [2024-05-15 16:36:57.482892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.467 [2024-05-15 16:36:57.482889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.467 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.467 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:50.467 16:36:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.399 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.657 malloc0 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.657 [2024-05-15 16:36:58.664434] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.657 16:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:51.657 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.657 00:15:51.657 00:15:51.657 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.657 http://cunit.sourceforge.net/ 00:15:51.657 00:15:51.657 00:15:51.657 Suite: nvme_compliance 00:15:51.657 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 16:36:58.833718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.657 [2024-05-15 16:36:58.835113] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:51.657 [2024-05-15 16:36:58.835137] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:51.657 [2024-05-15 16:36:58.835165] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:51.657 [2024-05-15 16:36:58.836737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.657 passed 00:15:51.914 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 16:36:58.921306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.914 [2024-05-15 16:36:58.924331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.914 passed 00:15:51.914 Test: admin_identify_ns ...[2024-05-15 16:36:59.010767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.914 [2024-05-15 16:36:59.070247] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:51.914 [2024-05-15 16:36:59.078244] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:51.914 [2024-05-15 16:36:59.099356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.914 passed 00:15:52.171 Test: admin_get_features_mandatory_features ...[2024-05-15 16:36:59.183541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.171 [2024-05-15 16:36:59.187549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.171 passed 00:15:52.171 Test: admin_get_features_optional_features ...[2024-05-15 16:36:59.272092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.171 [2024-05-15 16:36:59.275112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.171 passed 00:15:52.171 Test: admin_set_features_number_of_queues ...[2024-05-15 16:36:59.360419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.429 [2024-05-15 16:36:59.465321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.429 passed 00:15:52.429 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 16:36:59.549153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.429 [2024-05-15 16:36:59.552178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.429 passed 
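[editor's sketch] Note that every compliance case above runs between an enable_ctrlr/disable_ctrlr NOTICE pair, and the *ERROR* lines inside a passing test are expected negative-path rejections (for example the deliberately invalid DPTR in admin_identify_ctrlr_verify_dptr, rejected as "no PRP2, 3072 remaining"). The target the suite talks to was built by the rpc_cmd calls earlier; rpc_cmd is the test framework's wrapper around rpc.py, so the setup condenses to roughly (paths abbreviated):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # then run the CUnit suite against that socket
  nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'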
00:15:52.429 Test: admin_get_log_page_with_lpo ...[2024-05-15 16:36:59.637390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.686 [2024-05-15 16:36:59.705233] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:52.686 [2024-05-15 16:36:59.718309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.686 passed 00:15:52.686 Test: fabric_property_get ...[2024-05-15 16:36:59.802019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.686 [2024-05-15 16:36:59.803299] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:52.686 [2024-05-15 16:36:59.805040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.686 passed 00:15:52.686 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 16:36:59.886599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.686 [2024-05-15 16:36:59.887857] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:52.686 [2024-05-15 16:36:59.889616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.943 passed 00:15:52.943 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 16:36:59.974786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.943 [2024-05-15 16:37:00.062246] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:52.943 [2024-05-15 16:37:00.078240] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:52.943 [2024-05-15 16:37:00.086485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.943 passed 00:15:52.943 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 16:37:00.169177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.943 [2024-05-15 16:37:00.170542] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:53.200 [2024-05-15 16:37:00.174213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.200 passed 00:15:53.200 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 16:37:00.261043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.200 [2024-05-15 16:37:00.336237] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.200 [2024-05-15 16:37:00.360246] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.200 [2024-05-15 16:37:00.365469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.200 passed 00:15:53.485 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 16:37:00.451811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.485 [2024-05-15 16:37:00.453108] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:53.485 [2024-05-15 16:37:00.453163] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:53.485 [2024-05-15 16:37:00.454850] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.485 passed 00:15:53.485 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
16:37:00.541182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.485 [2024-05-15 16:37:00.634228] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:53.485 [2024-05-15 16:37:00.642228] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:53.485 [2024-05-15 16:37:00.650225] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:53.485 [2024-05-15 16:37:00.658225] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:53.485 [2024-05-15 16:37:00.687465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.743 passed 00:15:53.743 Test: admin_create_io_sq_verify_pc ...[2024-05-15 16:37:00.769915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.743 [2024-05-15 16:37:00.785236] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:53.743 [2024-05-15 16:37:00.803259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.743 passed 00:15:53.743 Test: admin_create_io_qp_max_qps ...[2024-05-15 16:37:00.887837] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.115 [2024-05-15 16:37:01.977232] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:55.372 [2024-05-15 16:37:02.373467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.372 passed 00:15:55.372 Test: admin_create_io_sq_shared_cq ...[2024-05-15 16:37:02.458814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.372 [2024-05-15 16:37:02.587253] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:55.630 [2024-05-15 16:37:02.624310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.630 passed 00:15:55.630 00:15:55.630 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.630 suites 1 1 n/a 0 0 00:15:55.630 tests 18 18 18 0 0 00:15:55.630 asserts 360 360 360 0 n/a 00:15:55.630 00:15:55.630 Elapsed time = 1.572 seconds 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1740880 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1740880 ']' 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1740880 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1740880 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1740880' 00:15:55.630 killing process with pid 1740880 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 1740880 00:15:55.630 [2024-05-15 16:37:02.708952] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:55.630 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1740880 00:15:55.887 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:55.887 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:55.887 00:15:55.887 real 0m5.732s 00:15:55.887 user 0m16.104s 00:15:55.887 sys 0m0.585s 00:15:55.887 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.887 16:37:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.887 ************************************ 00:15:55.887 END TEST nvmf_vfio_user_nvme_compliance 00:15:55.887 ************************************ 00:15:55.887 16:37:02 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:55.887 16:37:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:55.887 16:37:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.887 16:37:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.887 ************************************ 00:15:55.887 START TEST nvmf_vfio_user_fuzz 00:15:55.887 ************************************ 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:55.887 * Looking for test storage... 
00:15:55.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:55.887 16:37:03 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1741601 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1741601' 00:15:55.887 Process pid: 1741601 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1741601 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1741601 ']' 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:55.887 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.451 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:56.451 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:56.451 16:37:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 malloc0 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:57.382 16:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:29.435 Fuzzing completed. Shutting down the fuzz application 00:16:29.435 00:16:29.435 Dumping successful admin opcodes: 00:16:29.435 8, 9, 10, 24, 00:16:29.435 Dumping successful io opcodes: 00:16:29.435 0, 00:16:29.435 NS: 0x200003a1ef00 I/O qp, Total commands completed: 538581, total successful commands: 2077, random_seed: 2379371904 00:16:29.435 NS: 0x200003a1ef00 admin qp, Total commands completed: 117724, total successful commands: 964, random_seed: 2715447232 00:16:29.435 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:29.435 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.435 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1741601 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1741601 ']' 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1741601 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1741601 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1741601' 00:16:29.436 killing process with pid 1741601 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 1741601 00:16:29.436 16:37:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1741601 00:16:29.436 16:37:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
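The fuzz pass above reduces to a short bring-up sequence followed by one nvme_fuzz invocation. A condensed sketch, assuming an SPDK checkout at $SPDK_DIR and using scripts/rpc.py in place of the harness's rpc_cmd wrapper; the flags mirror the commands traced above.

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}            # assumption: local SPDK checkout
rpc="$SPDK_DIR/scripts/rpc.py"

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &   # target on one core
nvmfpid=$!
trap 'kill $nvmfpid' EXIT
sleep 2                                        # crude stand-in for waitforlisten

# Build a vfio-user subsystem backed by a 64 MiB malloc bdev, as in the trace.
"$rpc" nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
"$rpc" bdev_malloc_create 64 512 -b malloc0
"$rpc" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
"$rpc" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# 30-second fuzz with a fixed seed (-S), so a failing run is reproducible.
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a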
00:16:29.436 16:37:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:29.436 00:16:29.436 real 0m32.254s 00:16:29.436 user 0m31.230s 00:16:29.436 sys 0m28.342s 00:16:29.436 16:37:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:29.436 16:37:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.436 ************************************ 00:16:29.436 END TEST nvmf_vfio_user_fuzz 00:16:29.436 ************************************ 00:16:29.436 16:37:35 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:29.436 16:37:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:29.436 16:37:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:29.436 16:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.436 ************************************ 00:16:29.436 START TEST nvmf_host_management 00:16:29.436 ************************************ 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:29.436 * Looking for test storage... 00:16:29.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
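The build_nvmf_app_args step above grows the target's command line in a bash array rather than a flat string, so every flag survives as exactly one argv word. The idiom, reduced to a sketch with illustrative values:

NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id and tracepoint mask, as above
NVMF_APP+=("${NO_HUGE[@]}")                    # an empty array expands to nothing
"${NVMF_APP[@]}" -m 0x1E &                     # expands word-by-word, no re-splitting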
00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.436 16:37:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.811 16:37:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:30.811 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:30.811 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
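The discovery loop above matches each PCI function's vendor:device pair against known NIC ID lists (0x8086:0x159b is an Intel E810 variant), then resolves the function to its kernel interface through sysfs. The sysfs step, reduced to a sketch that assumes pci_devs already holds addresses like 0000:09:00.0:

net_devs=()
for pci in "${pci_devs[@]}"; do
    # each bound PCI function lists its netdev name(s) under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # drop the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done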
00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:30.811 Found net devices under 0000:09:00.0: cvl_0_0 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:30.811 Found net devices under 0000:09:00.1: cvl_0_1 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:16:30.811 00:16:30.811 --- 10.0.0.2 ping statistics --- 00:16:30.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.811 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:16:30.811 00:16:30.811 --- 10.0.0.1 ping statistics --- 00:16:30.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.811 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1747331 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1747331 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1747331 ']' 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:30.811 16:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 [2024-05-15 16:37:38.017443] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
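The nvmf_tcp_init sequence above builds a two-endpoint topology on a single host by moving the target's port into its own network namespace; the initiator port stays in the root namespace, and the cross-namespace pings prove the path before any NVMe traffic flows. Replayed in order, the traced commands amount to this sketch (interface names cvl_0_0/cvl_0_1 are the ones discovered above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns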
00:16:30.811 [2024-05-15 16:37:38.017525] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.069 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.069 [2024-05-15 16:37:38.091014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.069 [2024-05-15 16:37:38.178130] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.069 [2024-05-15 16:37:38.178190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.069 [2024-05-15 16:37:38.178202] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.069 [2024-05-15 16:37:38.178213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.069 [2024-05-15 16:37:38.178248] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.069 [2024-05-15 16:37:38.178338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.069 [2024-05-15 16:37:38.178403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.069 [2024-05-15 16:37:38.178469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:31.069 [2024-05-15 16:37:38.178471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.327 [2024-05-15 16:37:38.325714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.327 16:37:38 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.327 Malloc0 00:16:31.327 [2024-05-15 16:37:38.384038] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:31.327 [2024-05-15 16:37:38.384382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1747382 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1747382 /var/tmp/bdevperf.sock 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1747382 ']' 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:31.327 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:31.327 { 00:16:31.327 "params": { 00:16:31.327 "name": "Nvme$subsystem", 00:16:31.327 "trtype": "$TEST_TRANSPORT", 00:16:31.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.328 "adrfam": "ipv4", 00:16:31.328 "trsvcid": "$NVMF_PORT", 00:16:31.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.328 "hdgst": ${hdgst:-false}, 00:16:31.328 "ddgst": ${ddgst:-false} 00:16:31.328 }, 00:16:31.328 "method": "bdev_nvme_attach_controller" 00:16:31.328 } 00:16:31.328 EOF 00:16:31.328 )") 00:16:31.328 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:31.328 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
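The gen_nvmf_target_json helper above synthesizes bdevperf's controller config at run time: one here-doc template per subsystem with shell parameters substituted in, the result round-tripped through jq as a validity check, and the whole thing handed to bdevperf as --json /dev/fd/63, which is just bash process substitution. A sketch of the pattern; the outer "subsystems" wrapper shape is an assumption here, inferred from bdevperf taking a full app config, and the function name marks it as a sketch.

gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,                                # join entries with commas
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}
# usage, matching the traced flags:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json_sketch 0) \
#       -q 64 -o 65536 -w verify -t 10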
00:16:31.328 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:31.328 16:37:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:31.328 "params": { 00:16:31.328 "name": "Nvme0", 00:16:31.328 "trtype": "tcp", 00:16:31.328 "traddr": "10.0.0.2", 00:16:31.328 "adrfam": "ipv4", 00:16:31.328 "trsvcid": "4420", 00:16:31.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:31.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:31.328 "hdgst": false, 00:16:31.328 "ddgst": false 00:16:31.328 }, 00:16:31.328 "method": "bdev_nvme_attach_controller" 00:16:31.328 }' 00:16:31.328 [2024-05-15 16:37:38.454122] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:16:31.328 [2024-05-15 16:37:38.454202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747382 ] 00:16:31.328 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.328 [2024-05-15 16:37:38.525032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.586 [2024-05-15 16:37:38.609676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.844 Running I/O for 10 seconds... 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.844 16:37:38 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:31.844 16:37:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=526 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 526 -ge 100 ']' 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.103 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.103 [2024-05-15 16:37:39.231546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.103 [2024-05-15 16:37:39.231617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.103 [2024-05-15 16:37:39.231632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 
is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 [2024-05-15 16:37:39.231778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c8b0 is same with the state(5) to be set 00:16:32.104 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.104 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:32.104 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.104 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.104 16:37:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.104 16:37:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:32.104 [2024-05-15 16:37:39.246423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246693] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.246973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.246988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.104 [2024-05-15 16:37:39.247585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.104 [2024-05-15 16:37:39.247601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.247973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.247988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-05-15 16:37:39.248516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:16:32.105 [2024-05-15 16:37:39.248646] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8a61f0 was disconnected and freed. reset controller. 
00:16:32.105 [2024-05-15 16:37:39.248725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.105 [2024-05-15 16:37:39.248762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.105 [2024-05-15 16:37:39.248792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.105 [2024-05-15 16:37:39.248820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.105 [2024-05-15 16:37:39.248848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.105 [2024-05-15 16:37:39.248863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8abdc0 is same with the state(5) to be set 00:16:32.105 [2024-05-15 16:37:39.249992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:32.105 task offset: 81920 on job bdev=Nvme0n1 fails 00:16:32.105 00:16:32.105 Latency(us) 00:16:32.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.105 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:32.105 Job: Nvme0n1 ended in about 0.42 seconds with error 00:16:32.105 Verification LBA range: start 0x0 length 0x400 00:16:32.105 Nvme0n1 : 0.42 1526.99 95.44 152.70 0.00 37049.81 2852.03 33787.45 00:16:32.105 =================================================================================================================== 00:16:32.105 Total : 1526.99 95.44 152.70 0.00 37049.81 2852.03 33787.45 00:16:32.105 [2024-05-15 16:37:39.251852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:32.105 [2024-05-15 16:37:39.251880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abdc0 (9): Bad file descriptor 00:16:32.105 [2024-05-15 16:37:39.265548] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
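The wall of ABORTED - SQ DELETION completions above is the expected signature of this test rather than a transport failure: host_management.sh revokes and then re-grants access for a host NQN on a live subsystem, so the target deletes qpair 1 under the initiator, every in-flight WRITE is failed back, and bdev_nvme resets the controller. A minimal sketch of the RPC pair that drives the cycle (the deny step happens earlier in the script, outside this excerpt, and is assumed here; rpc_cmd in the log is a thin wrapper over rpc.py):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # assumed earlier step: revoke the host so its connection is torn down
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # step seen at the top of this excerpt: re-admit the host
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0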
00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1747382 00:16:33.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1747382) - No such process 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:33.038 { 00:16:33.038 "params": { 00:16:33.038 "name": "Nvme$subsystem", 00:16:33.038 "trtype": "$TEST_TRANSPORT", 00:16:33.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:33.038 "adrfam": "ipv4", 00:16:33.038 "trsvcid": "$NVMF_PORT", 00:16:33.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:33.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:33.038 "hdgst": ${hdgst:-false}, 00:16:33.038 "ddgst": ${ddgst:-false} 00:16:33.038 }, 00:16:33.038 "method": "bdev_nvme_attach_controller" 00:16:33.038 } 00:16:33.038 EOF 00:16:33.038 )") 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:33.038 16:37:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:33.038 "params": { 00:16:33.038 "name": "Nvme0", 00:16:33.038 "trtype": "tcp", 00:16:33.038 "traddr": "10.0.0.2", 00:16:33.038 "adrfam": "ipv4", 00:16:33.038 "trsvcid": "4420", 00:16:33.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:33.038 "hdgst": false, 00:16:33.038 "ddgst": false 00:16:33.038 }, 00:16:33.038 "method": "bdev_nvme_attach_controller" 00:16:33.038 }' 00:16:33.296 [2024-05-15 16:37:40.289973] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:16:33.296 [2024-05-15 16:37:40.290089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747659 ] 00:16:33.296 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.296 [2024-05-15 16:37:40.361462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.296 [2024-05-15 16:37:40.448695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.553 Running I/O for 1 seconds... 
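gen_nvmf_target_json expands the heredoc above once per subsystem and hands the result to bdevperf through process substitution on /dev/fd/62. With the values printed by the script, the resolved configuration is equivalent to a standalone file like the following sketch (the params block is copied from the printf output above; the outer "subsystems"/"bdev" wrapper is the standard SPDK application JSON layout and is assumed here):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

Saved to a file, the same run would be: bdevperf --json config.json -q 64 -o 65536 -w verify -t 1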
00:16:34.485 00:16:34.485 Latency(us) 00:16:34.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.485 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.485 Verification LBA range: start 0x0 length 0x400 00:16:34.485 Nvme0n1 : 1.03 1616.21 101.01 0.00 0.00 38971.02 8446.86 33010.73 00:16:34.485 =================================================================================================================== 00:16:34.485 Total : 1616.21 101.01 0.00 0.00 38971.02 8446.86 33010.73 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.743 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.743 rmmod nvme_tcp 00:16:34.743 rmmod nvme_fabrics 00:16:35.001 rmmod nvme_keyring 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1747331 ']' 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1747331 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1747331 ']' 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1747331 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:35.001 16:37:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:35.001 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1747331 00:16:35.001 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:35.001 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:35.001 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1747331' 00:16:35.001 killing process with pid 1747331 00:16:35.001 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1747331 00:16:35.001 [2024-05-15 16:37:42.028482] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:35.001 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1747331 00:16:35.259 [2024-05-15 16:37:42.247388] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.259 16:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.212 16:37:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:37.212 16:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:37.212 00:16:37.212 real 0m9.001s 00:16:37.212 user 0m19.334s 00:16:37.212 sys 0m2.870s 00:16:37.212 16:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:37.212 16:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.213 ************************************ 00:16:37.213 END TEST nvmf_host_management 00:16:37.213 ************************************ 00:16:37.213 16:37:44 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:37.213 16:37:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:37.213 16:37:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.213 16:37:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.213 ************************************ 00:16:37.213 START TEST nvmf_lvol 00:16:37.213 ************************************ 00:16:37.213 16:37:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:37.471 * Looking for test storage... 
00:16:37.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.471 16:37:44 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.471 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.472 16:37:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:40.002 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:40.002 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:40.002 Found net devices under 0000:09:00.0: cvl_0_0 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:40.002 Found net devices under 0000:09:00.1: cvl_0_1 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.002 
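The device scan above matches the two Intel E810 functions (0x8086:0x159b) and then resolves each PCI address to its kernel net device through sysfs. The core of that loop, condensed from the common.sh fragments in this log (the pci_devs array and the rdma/unbound checks are built earlier in common.sh and omitted here):

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done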
16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.002 16:37:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.002 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:16:40.003 00:16:40.003 --- 10.0.0.2 ping statistics --- 00:16:40.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.003 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:16:40.003 00:16:40.003 --- 10.0.0.1 ping statistics --- 00:16:40.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.003 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1750144 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1750144 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1750144 ']' 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.003 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:40.003 [2024-05-15 16:37:47.174414] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:16:40.003 [2024-05-15 16:37:47.174485] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.003 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.261 [2024-05-15 16:37:47.250108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.261 [2024-05-15 16:37:47.339066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.261 [2024-05-15 16:37:47.339131] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
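nvmf_tcp_init above places the target-side port (cvl_0_0) in its own network namespace so that initiator and target can exchange NVMe/TCP traffic over real NICs on a single machine, which the two pings then verify. Condensed from the commands in this log (interface names and addresses as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator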
00:16:40.261 [2024-05-15 16:37:47.339159] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.261 [2024-05-15 16:37:47.339174] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.261 [2024-05-15 16:37:47.339187] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.261 [2024-05-15 16:37:47.339289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.261 [2024-05-15 16:37:47.339312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.261 [2024-05-15 16:37:47.339315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.262 16:37:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:40.519 [2024-05-15 16:37:47.681642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.519 16:37:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.777 16:37:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:40.777 16:37:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:41.035 16:37:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:41.035 16:37:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:41.293 16:37:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:41.551 16:37:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fadd24d1-0c62-4b4d-83ae-5deb4c87ee35 00:16:41.551 16:37:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fadd24d1-0c62-4b4d-83ae-5deb4c87ee35 lvol 20 00:16:41.809 16:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=451b6e62-e1d1-40c3-bee5-844283cc170a 00:16:41.809 16:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:42.067 16:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 451b6e62-e1d1-40c3-bee5-844283cc170a 00:16:42.324 16:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:42.581 [2024-05-15 16:37:49.738478] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:42.582 [2024-05-15 16:37:49.738820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.582 16:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:42.840 16:37:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1750513 00:16:42.840 16:37:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:42.840 16:37:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:42.840 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.214 16:37:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 451b6e62-e1d1-40c3-bee5-844283cc170a MY_SNAPSHOT 00:16:44.214 16:37:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9fea10d7-bfa2-4fb7-98ce-1655a7563fb4 00:16:44.214 16:37:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 451b6e62-e1d1-40c3-bee5-844283cc170a 30 00:16:44.478 16:37:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9fea10d7-bfa2-4fb7-98ce-1655a7563fb4 MY_CLONE 00:16:44.737 16:37:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1e73ebed-4ea0-4c57-a133-83a9e8752f3b 00:16:44.737 16:37:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1e73ebed-4ea0-4c57-a133-83a9e8752f3b 00:16:45.302 16:37:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1750513 00:16:53.406 Initializing NVMe Controllers 00:16:53.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:53.406 Controller IO queue size 128, less than required. 00:16:53.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:53.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:53.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:53.406 Initialization complete. Launching workers. 
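The lvol test provisions its whole stack over RPC before starting I/O: two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol exported over NVMe/TCP, then, while spdk_nvme_perf writes to it, a snapshot, a resize to 30 MiB, a clone of the snapshot, and an inflate of the clone. A condensed sketch of the same sequence (each create call prints the new bdev name or UUID, so the IDs are captured instead of hard-coding the UUIDs seen in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                     # -> Malloc0
  $rpc bdev_malloc_create 64 512                     # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # returns the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB lvol, returns its UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                   # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                    # decouple the clone from its snapshot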
00:16:53.406 ======================================================== 00:16:53.406 Latency(us) 00:16:53.406 Device Information : IOPS MiB/s Average min max 00:16:53.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10028.80 39.17 12765.96 2579.03 68347.28 00:16:53.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10458.50 40.85 12243.42 2681.84 61530.18 00:16:53.406 ======================================================== 00:16:53.406 Total : 20487.30 80.03 12499.21 2579.03 68347.28 00:16:53.406 00:16:53.406 16:38:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:53.663 16:38:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 451b6e62-e1d1-40c3-bee5-844283cc170a 00:16:53.921 16:38:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fadd24d1-0c62-4b4d-83ae-5deb4c87ee35 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.178 rmmod nvme_tcp 00:16:54.178 rmmod nvme_fabrics 00:16:54.178 rmmod nvme_keyring 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1750144 ']' 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1750144 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1750144 ']' 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1750144 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1750144 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1750144' 00:16:54.178 killing process with pid 1750144 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1750144 00:16:54.178 [2024-05-15 16:38:01.285449] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:16:54.178 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1750144 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.436 16:38:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.968 00:16:56.968 real 0m19.240s 00:16:56.968 user 1m1.811s 00:16:56.968 sys 0m6.834s 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:56.968 ************************************ 00:16:56.968 END TEST nvmf_lvol 00:16:56.968 ************************************ 00:16:56.968 16:38:03 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:56.968 16:38:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:56.968 16:38:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:56.968 16:38:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.968 ************************************ 00:16:56.968 START TEST nvmf_lvs_grow 00:16:56.968 ************************************ 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:56.968 * Looking for test storage... 
00:16:56.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.968 16:38:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.969 16:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:59.497 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.497 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:59.498 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:59.498 Found net devices under 0000:09:00.0: cvl_0_0 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:59.498 Found net devices under 0000:09:00.1: cvl_0_1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:16:59.498 00:16:59.498 --- 10.0.0.2 ping statistics --- 00:16:59.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.498 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:16:59.498 00:16:59.498 --- 10.0.0.1 ping statistics --- 00:16:59.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.498 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1754117 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1754117 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1754117 ']' 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:59.498 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.498 [2024-05-15 16:38:06.449413] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:16:59.498 [2024-05-15 16:38:06.449497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.498 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.498 [2024-05-15 16:38:06.523325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.498 [2024-05-15 16:38:06.608126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.498 [2024-05-15 16:38:06.608182] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
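Annotation: the ping exchange above completes the phy test-bed wiring before any NVMe-oF traffic is attempted: both E810 ports come up as cvl_0_0/cvl_0_1, the target-side port is moved into its own network namespace, each side gets an address on 10.0.0.0/24, and TCP port 4420 is opened, after which nvmf_tgt starts inside the namespace. As a minimal sketch (assuming root, the cvl_* interface names from this run, and an SPDK build tree), the same wiring is approximately:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic reach the target
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
The two pings (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) prove the path in both directions before the target starts listening.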
00:16:59.498 [2024-05-15 16:38:06.608195] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.498 [2024-05-15 16:38:06.608207] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.498 [2024-05-15 16:38:06.608223] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.498 [2024-05-15 16:38:06.608269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.755 16:38:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.014 [2024-05-15 16:38:07.023382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:00.014 ************************************ 00:17:00.014 START TEST lvs_grow_clean 00:17:00.014 ************************************ 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:00.014 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:00.272 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:00.272 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:00.529 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=188de9fd-168b-41a9-ab42-5b23740b2359 00:17:00.529 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:00.529 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:00.831 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:00.831 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:00.831 16:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 188de9fd-168b-41a9-ab42-5b23740b2359 lvol 150 00:17:01.089 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=808fc8b2-e63a-4c0b-bdf2-3d18d6979d60 00:17:01.089 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:01.089 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:01.347 [2024-05-15 16:38:08.443634] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:01.347 [2024-05-15 16:38:08.443737] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:01.347 true 00:17:01.347 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:01.347 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:01.605 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:01.605 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:01.863 16:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 808fc8b2-e63a-4c0b-bdf2-3d18d6979d60 00:17:02.122 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:02.379 [2024-05-15 16:38:09.538770] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:02.379 [2024-05-15 
16:38:09.539073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.379 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1754559 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1754559 /var/tmp/bdevperf.sock 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1754559 ']' 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.637 16:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:02.637 [2024-05-15 16:38:09.838755] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:17:02.637 [2024-05-15 16:38:09.838825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754559 ] 00:17:02.895 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.895 [2024-05-15 16:38:09.910455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.895 [2024-05-15 16:38:09.997709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.895 16:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.895 16:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:02.895 16:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:03.460 Nvme0n1 00:17:03.460 16:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:03.719 [ 00:17:03.719 { 00:17:03.719 "name": "Nvme0n1", 00:17:03.719 "aliases": [ 00:17:03.719 "808fc8b2-e63a-4c0b-bdf2-3d18d6979d60" 00:17:03.719 ], 00:17:03.719 "product_name": "NVMe disk", 00:17:03.719 "block_size": 4096, 00:17:03.719 "num_blocks": 38912, 00:17:03.719 "uuid": "808fc8b2-e63a-4c0b-bdf2-3d18d6979d60", 00:17:03.719 "assigned_rate_limits": { 00:17:03.719 "rw_ios_per_sec": 0, 00:17:03.719 "rw_mbytes_per_sec": 0, 00:17:03.719 "r_mbytes_per_sec": 0, 00:17:03.719 "w_mbytes_per_sec": 0 00:17:03.719 }, 00:17:03.719 "claimed": false, 00:17:03.719 "zoned": false, 00:17:03.719 "supported_io_types": { 00:17:03.719 "read": true, 00:17:03.719 "write": true, 00:17:03.719 "unmap": true, 00:17:03.719 "write_zeroes": true, 00:17:03.719 "flush": true, 00:17:03.719 "reset": true, 00:17:03.719 "compare": true, 00:17:03.719 "compare_and_write": true, 00:17:03.719 "abort": true, 00:17:03.719 "nvme_admin": true, 00:17:03.719 "nvme_io": true 00:17:03.719 }, 00:17:03.719 "memory_domains": [ 00:17:03.719 { 00:17:03.719 "dma_device_id": "system", 00:17:03.719 "dma_device_type": 1 00:17:03.719 } 00:17:03.719 ], 00:17:03.719 "driver_specific": { 00:17:03.719 "nvme": [ 00:17:03.719 { 00:17:03.719 "trid": { 00:17:03.719 "trtype": "TCP", 00:17:03.719 "adrfam": "IPv4", 00:17:03.719 "traddr": "10.0.0.2", 00:17:03.719 "trsvcid": "4420", 00:17:03.719 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:03.719 }, 00:17:03.719 "ctrlr_data": { 00:17:03.719 "cntlid": 1, 00:17:03.719 "vendor_id": "0x8086", 00:17:03.719 "model_number": "SPDK bdev Controller", 00:17:03.719 "serial_number": "SPDK0", 00:17:03.719 "firmware_revision": "24.05", 00:17:03.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:03.719 "oacs": { 00:17:03.719 "security": 0, 00:17:03.719 "format": 0, 00:17:03.719 "firmware": 0, 00:17:03.719 "ns_manage": 0 00:17:03.719 }, 00:17:03.719 "multi_ctrlr": true, 00:17:03.719 "ana_reporting": false 00:17:03.719 }, 00:17:03.719 "vs": { 00:17:03.719 "nvme_version": "1.3" 00:17:03.719 }, 00:17:03.719 "ns_data": { 00:17:03.719 "id": 1, 00:17:03.719 "can_share": true 00:17:03.719 } 00:17:03.719 } 00:17:03.719 ], 00:17:03.719 "mp_policy": "active_passive" 00:17:03.719 } 00:17:03.719 } 00:17:03.719 ] 00:17:03.719 16:38:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1754692 00:17:03.719 16:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:03.719 16:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:03.978 Running I/O for 10 seconds... 00:17:04.911 Latency(us) 00:17:04.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.911 Nvme0n1 : 1.00 13667.00 53.39 0.00 0.00 0.00 0.00 0.00 00:17:04.911 =================================================================================================================== 00:17:04.911 Total : 13667.00 53.39 0.00 0.00 0.00 0.00 0.00 00:17:04.911 00:17:05.848 16:38:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:05.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.848 Nvme0n1 : 2.00 14176.00 55.38 0.00 0.00 0.00 0.00 0.00 00:17:05.848 =================================================================================================================== 00:17:05.848 Total : 14176.00 55.38 0.00 0.00 0.00 0.00 0.00 00:17:05.848 00:17:06.108 true 00:17:06.108 16:38:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:06.108 16:38:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:06.375 16:38:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:06.375 16:38:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:06.375 16:38:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1754692 00:17:06.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.941 Nvme0n1 : 3.00 14282.67 55.79 0.00 0.00 0.00 0.00 0.00 00:17:06.941 =================================================================================================================== 00:17:06.941 Total : 14282.67 55.79 0.00 0.00 0.00 0.00 0.00 00:17:06.941 00:17:07.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.875 Nvme0n1 : 4.00 14368.50 56.13 0.00 0.00 0.00 0.00 0.00 00:17:07.875 =================================================================================================================== 00:17:07.875 Total : 14368.50 56.13 0.00 0.00 0.00 0.00 0.00 00:17:07.875 00:17:08.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.810 Nvme0n1 : 5.00 14505.00 56.66 0.00 0.00 0.00 0.00 0.00 00:17:08.810 =================================================================================================================== 00:17:08.810 Total : 14505.00 56.66 0.00 0.00 0.00 0.00 0.00 00:17:08.810 00:17:10.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.185 Nvme0n1 : 6.00 14500.50 56.64 0.00 0.00 0.00 0.00 0.00 00:17:10.185 
=================================================================================================================== 00:17:10.185 Total : 14500.50 56.64 0.00 0.00 0.00 0.00 0.00 00:17:10.185 00:17:11.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.120 Nvme0n1 : 7.00 14579.14 56.95 0.00 0.00 0.00 0.00 0.00 00:17:11.120 =================================================================================================================== 00:17:11.120 Total : 14579.14 56.95 0.00 0.00 0.00 0.00 0.00 00:17:11.120 00:17:12.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.057 Nvme0n1 : 8.00 14617.12 57.10 0.00 0.00 0.00 0.00 0.00 00:17:12.057 =================================================================================================================== 00:17:12.057 Total : 14617.12 57.10 0.00 0.00 0.00 0.00 0.00 00:17:12.057 00:17:12.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.991 Nvme0n1 : 9.00 14629.89 57.15 0.00 0.00 0.00 0.00 0.00 00:17:12.991 =================================================================================================================== 00:17:12.991 Total : 14629.89 57.15 0.00 0.00 0.00 0.00 0.00 00:17:12.991 00:17:13.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.927 Nvme0n1 : 10.00 14661.10 57.27 0.00 0.00 0.00 0.00 0.00 00:17:13.927 =================================================================================================================== 00:17:13.927 Total : 14661.10 57.27 0.00 0.00 0.00 0.00 0.00 00:17:13.927 00:17:13.927 00:17:13.927 Latency(us) 00:17:13.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.927 Nvme0n1 : 10.00 14667.63 57.30 0.00 0.00 8721.41 4878.79 19515.16 00:17:13.927 =================================================================================================================== 00:17:13.927 Total : 14667.63 57.30 0.00 0.00 8721.41 4878.79 19515.16 00:17:13.927 0 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1754559 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 1754559 ']' 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1754559 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1754559 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1754559' 00:17:13.927 killing process with pid 1754559 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1754559 00:17:13.927 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.927 00:17:13.927 Latency(us) 00:17:13.927 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:13.927 =================================================================================================================== 00:17:13.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.927 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1754559 00:17:14.185 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.443 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:14.701 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:14.701 16:38:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:14.959 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:14.959 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:14.959 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:15.525 [2024-05-15 16:38:22.444246] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:15.525 request: 00:17:15.525 { 00:17:15.525 "uuid": "188de9fd-168b-41a9-ab42-5b23740b2359", 00:17:15.525 "method": "bdev_lvol_get_lvstores", 00:17:15.525 "req_id": 1 00:17:15.525 } 00:17:15.525 Got JSON-RPC error response 00:17:15.525 response: 00:17:15.525 { 00:17:15.525 "code": -19, 00:17:15.525 "message": "No such device" 00:17:15.525 } 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:15.525 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:15.834 aio_bdev 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 808fc8b2-e63a-4c0b-bdf2-3d18d6979d60 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=808fc8b2-e63a-4c0b-bdf2-3d18d6979d60 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:15.835 16:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:16.094 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 808fc8b2-e63a-4c0b-bdf2-3d18d6979d60 -t 2000 00:17:16.352 [ 00:17:16.352 { 00:17:16.352 "name": "808fc8b2-e63a-4c0b-bdf2-3d18d6979d60", 00:17:16.352 "aliases": [ 00:17:16.352 "lvs/lvol" 00:17:16.352 ], 00:17:16.352 "product_name": "Logical Volume", 00:17:16.352 "block_size": 4096, 00:17:16.352 "num_blocks": 38912, 00:17:16.352 "uuid": "808fc8b2-e63a-4c0b-bdf2-3d18d6979d60", 00:17:16.352 "assigned_rate_limits": { 00:17:16.352 "rw_ios_per_sec": 0, 00:17:16.352 "rw_mbytes_per_sec": 0, 00:17:16.352 "r_mbytes_per_sec": 0, 00:17:16.352 "w_mbytes_per_sec": 0 00:17:16.352 }, 00:17:16.352 "claimed": false, 00:17:16.352 "zoned": false, 00:17:16.352 "supported_io_types": { 00:17:16.352 "read": true, 00:17:16.352 "write": true, 00:17:16.352 "unmap": true, 00:17:16.352 "write_zeroes": true, 00:17:16.352 "flush": false, 00:17:16.352 "reset": true, 00:17:16.352 "compare": false, 00:17:16.352 "compare_and_write": false, 00:17:16.352 "abort": false, 00:17:16.352 "nvme_admin": false, 00:17:16.352 "nvme_io": false 00:17:16.352 }, 00:17:16.352 "driver_specific": { 00:17:16.352 "lvol": { 00:17:16.352 "lvol_store_uuid": "188de9fd-168b-41a9-ab42-5b23740b2359", 00:17:16.352 "base_bdev": "aio_bdev", 
00:17:16.352 "thin_provision": false, 00:17:16.352 "num_allocated_clusters": 38, 00:17:16.352 "snapshot": false, 00:17:16.352 "clone": false, 00:17:16.352 "esnap_clone": false 00:17:16.352 } 00:17:16.352 } 00:17:16.352 } 00:17:16.352 ] 00:17:16.352 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:16.352 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:16.352 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:16.610 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:16.610 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:16.610 16:38:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:16.868 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:16.868 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 808fc8b2-e63a-4c0b-bdf2-3d18d6979d60 00:17:17.126 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 188de9fd-168b-41a9-ab42-5b23740b2359 00:17:17.384 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:17.643 00:17:17.643 real 0m17.754s 00:17:17.643 user 0m17.191s 00:17:17.643 sys 0m1.962s 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:17.643 ************************************ 00:17:17.643 END TEST lvs_grow_clean 00:17:17.643 ************************************ 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.643 16:38:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:17.901 ************************************ 00:17:17.901 START TEST lvs_grow_dirty 00:17:17.901 ************************************ 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:17.901 16:38:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:18.159 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:18.159 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:18.417 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc5538ae-ff66-4fc8-88c3-c47c4c9a3081 00:17:18.417 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081 00:17:18.417 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:18.674 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:18.674 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:18.674 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081 lvol 150 00:17:18.932 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2aeada42-6070-4756-b73b-531964c94e28 00:17:18.932 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.932 16:38:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:19.190 [2024-05-15 16:38:26.187612] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:19.190 [2024-05-15 16:38:26.187694] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:19.190 true 00:17:19.190 16:38:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081 00:17:19.190 16:38:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:19.448 16:38:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:19.448 16:38:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:19.706 16:38:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2aeada42-6070-4756-b73b-531964c94e28 00:17:19.964 16:38:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:20.221 [2024-05-15 16:38:27.230766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.222 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1756723 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1756723 /var/tmp/bdevperf.sock 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1756723 ']' 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:20.479 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.480 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:20.480 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:20.480 [2024-05-15 16:38:27.544977] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:17:20.480 [2024-05-15 16:38:27.545048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756723 ]
00:17:20.480 EAL: No free 2048 kB hugepages reported on node 1
00:17:20.480 [2024-05-15 16:38:27.616189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:20.480 [2024-05-15 16:38:27.703845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:20.738 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:20.738 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0
00:17:20.738 16:38:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:17:21.303 Nvme0n1
00:17:21.303 16:38:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:17:21.561 [
00:17:21.561 {
00:17:21.561 "name": "Nvme0n1",
00:17:21.561 "aliases": [
00:17:21.561 "2aeada42-6070-4756-b73b-531964c94e28"
00:17:21.561 ],
00:17:21.561 "product_name": "NVMe disk",
00:17:21.561 "block_size": 4096,
00:17:21.561 "num_blocks": 38912,
00:17:21.562 "uuid": "2aeada42-6070-4756-b73b-531964c94e28",
00:17:21.562 "assigned_rate_limits": {
00:17:21.562 "rw_ios_per_sec": 0,
00:17:21.562 "rw_mbytes_per_sec": 0,
00:17:21.562 "r_mbytes_per_sec": 0,
00:17:21.562 "w_mbytes_per_sec": 0
00:17:21.562 },
00:17:21.562 "claimed": false,
00:17:21.562 "zoned": false,
00:17:21.562 "supported_io_types": {
00:17:21.562 "read": true,
00:17:21.562 "write": true,
00:17:21.562 "unmap": true,
00:17:21.562 "write_zeroes": true,
00:17:21.562 "flush": true,
00:17:21.562 "reset": true,
00:17:21.562 "compare": true,
00:17:21.562 "compare_and_write": true,
00:17:21.562 "abort": true,
00:17:21.562 "nvme_admin": true,
00:17:21.562 "nvme_io": true
00:17:21.562 },
00:17:21.562 "memory_domains": [
00:17:21.562 {
00:17:21.562 "dma_device_id": "system",
00:17:21.562 "dma_device_type": 1
00:17:21.562 }
00:17:21.562 ],
00:17:21.562 "driver_specific": {
00:17:21.562 "nvme": [
00:17:21.562 {
00:17:21.562 "trid": {
00:17:21.562 "trtype": "TCP",
00:17:21.562 "adrfam": "IPv4",
00:17:21.562 "traddr": "10.0.0.2",
00:17:21.562 "trsvcid": "4420",
00:17:21.562 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:17:21.562 },
00:17:21.562 "ctrlr_data": {
00:17:21.562 "cntlid": 1,
00:17:21.562 "vendor_id": "0x8086",
00:17:21.562 "model_number": "SPDK bdev Controller",
00:17:21.562 "serial_number": "SPDK0",
00:17:21.562 "firmware_revision": "24.05",
00:17:21.562 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:21.562 "oacs": {
00:17:21.562 "security": 0,
00:17:21.562 "format": 0,
00:17:21.562 "firmware": 0,
00:17:21.562 "ns_manage": 0
00:17:21.562 },
00:17:21.562 "multi_ctrlr": true,
00:17:21.562 "ana_reporting": false
00:17:21.562 },
00:17:21.562 "vs": {
00:17:21.562 "nvme_version": "1.3"
00:17:21.562 },
00:17:21.562 "ns_data": {
00:17:21.562 "id": 1,
00:17:21.562 "can_share": true
00:17:21.562 }
00:17:21.562 }
00:17:21.562 ],
00:17:21.562 "mp_policy": "active_passive"
00:17:21.562 }
00:17:21.562 }
00:17:21.562 ]
00:17:21.562 16:38:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1756858
00:17:21.562 16:38:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:17:21.562 16:38:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:21.562 Running I/O for 10 seconds...
00:17:22.497 Latency(us)
00:17:22.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:22.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:22.497 Nvme0n1 : 1.00 14609.00 57.07 0.00 0.00 0.00 0.00 0.00
00:17:22.497 ===================================================================================================================
00:17:22.497 Total : 14609.00 57.07 0.00 0.00 0.00 0.00 0.00
00:17:22.497
00:17:23.432 16:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:23.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:23.690 Nvme0n1 : 2.00 14543.50 56.81 0.00 0.00 0.00 0.00 0.00
00:17:23.690 ===================================================================================================================
00:17:23.690 Total : 14543.50 56.81 0.00 0.00 0.00 0.00 0.00
00:17:23.690
00:17:23.690 true
00:17:23.690 16:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:23.690 16:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:17:23.948 16:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:17:23.948 16:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:17:23.948 16:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1756858
00:17:24.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:24.513 Nvme0n1 : 3.00 14587.33 56.98 0.00 0.00 0.00 0.00 0.00
00:17:24.513 ===================================================================================================================
00:17:24.513 Total : 14587.33 56.98 0.00 0.00 0.00 0.00 0.00
00:17:24.513
00:17:25.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:25.886 Nvme0n1 : 4.00 14785.75 57.76 0.00 0.00 0.00 0.00 0.00
00:17:25.886 ===================================================================================================================
00:17:25.886 Total : 14785.75 57.76 0.00 0.00 0.00 0.00 0.00
00:17:25.886
00:17:26.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:26.818 Nvme0n1 : 5.00 14890.60 58.17 0.00 0.00 0.00 0.00 0.00
00:17:26.818 ===================================================================================================================
00:17:26.818 Total : 14890.60 58.17 0.00 0.00 0.00 0.00 0.00
00:17:26.818
00:17:27.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:27.750 Nvme0n1 : 6.00 14941.83 58.37 0.00 0.00 0.00 0.00 0.00
00:17:27.750 ===================================================================================================================
00:17:27.750 Total : 14941.83 58.37 0.00 0.00 0.00 0.00 0.00
00:17:27.750
00:17:28.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:28.939 Nvme0n1 : 7.00 15049.00 58.79 0.00 0.00 0.00 0.00 0.00
00:17:28.939 ===================================================================================================================
00:17:28.939 Total : 15049.00 58.79 0.00 0.00 0.00 0.00 0.00
00:17:28.939
00:17:29.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:29.503 Nvme0n1 : 8.00 15057.38 58.82 0.00 0.00 0.00 0.00 0.00
00:17:29.503 ===================================================================================================================
00:17:29.503 Total : 15057.38 58.82 0.00 0.00 0.00 0.00 0.00
00:17:29.503
00:17:30.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:30.875 Nvme0n1 : 9.00 15120.00 59.06 0.00 0.00 0.00 0.00 0.00
00:17:30.875 ===================================================================================================================
00:17:30.875 Total : 15120.00 59.06 0.00 0.00 0.00 0.00 0.00
00:17:30.875
00:17:31.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:31.829 Nvme0n1 : 10.00 15164.30 59.24 0.00 0.00 0.00 0.00 0.00
00:17:31.829 ===================================================================================================================
00:17:31.829 Total : 15164.30 59.24 0.00 0.00 0.00 0.00 0.00
00:17:31.829
00:17:31.829
00:17:31.829 Latency(us)
00:17:31.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:31.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:31.829 Nvme0n1 : 10.01 15168.80 59.25 0.00 0.00 8433.07 4733.16 16602.45
00:17:31.829 ===================================================================================================================
00:17:31.829 Total : 15168.80 59.25 0.00 0.00 8433.07 4733.16 16602.45
00:17:31.829 0
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1756723
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1756723 ']'
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1756723
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1756723
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1756723' killing process with pid 1756723
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1756723
00:17:31.829 Received shutdown signal, test time was about 10.000000 seconds
00:17:31.829
00:17:31.829 Latency(us)
00:17:31.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:31.829 ===================================================================================================================
00:17:31.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1756723
00:17:31.829 16:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:32.102 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:32.359 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:32.359 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:17:32.617 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:17:32.617 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:17:32.617 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1754117
00:17:32.617 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1754117
00:17:32.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1754117 Killed "${NVMF_APP[@]}" "$@"
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1758146
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1758146
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1758146 ']'
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:32.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
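The kill -9 above is the point of the lvs_grow_dirty variant: the target holding the lvstore is killed without a clean shutdown, so the blobstore superblock stays marked dirty, and the restarted target has to recover it when the AIO bdev is recreated (the bs_recover notices below). A rough sketch of that sequence, assuming nvmf_pid holds the target's pid and the netns from the earlier setup:

kill -9 "$nvmf_pid"       # no clean lvstore unload; superblock stays dirty
wait "$nvmf_pid" || true  # reap the SIGKILLed process; non-zero status is expected
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmf_pid=$!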
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:32.875 16:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:17:32.875 [2024-05-15 16:38:39.900853] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:17:32.875 [2024-05-15 16:38:39.900941] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:32.875 EAL: No free 2048 kB hugepages reported on node 1
00:17:32.875 [2024-05-15 16:38:39.983589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:32.875 [2024-05-15 16:38:40.071063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:32.875 [2024-05-15 16:38:40.071119] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:32.875 [2024-05-15 16:38:40.071142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:32.875 [2024-05-15 16:38:40.071153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:32.875 [2024-05-15 16:38:40.071163] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:32.875 [2024-05-15 16:38:40.071189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:33.133 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:17:33.392 [2024-05-15 16:38:40.445007] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore
00:17:33.392 [2024-05-15 16:38:40.445135] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:17:33.392 [2024-05-15 16:38:40.445185] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2aeada42-6070-4756-b73b-531964c94e28
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=2aeada42-6070-4756-b73b-531964c94e28
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:17:33.392 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:17:33.649 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2aeada42-6070-4756-b73b-531964c94e28 -t 2000
00:17:33.907 [
00:17:33.907 {
00:17:33.907 "name": "2aeada42-6070-4756-b73b-531964c94e28",
00:17:33.907 "aliases": [
00:17:33.907 "lvs/lvol"
00:17:33.908 ],
00:17:33.908 "product_name": "Logical Volume",
00:17:33.908 "block_size": 4096,
00:17:33.908 "num_blocks": 38912,
00:17:33.908 "uuid": "2aeada42-6070-4756-b73b-531964c94e28",
00:17:33.908 "assigned_rate_limits": {
00:17:33.908 "rw_ios_per_sec": 0,
00:17:33.908 "rw_mbytes_per_sec": 0,
00:17:33.908 "r_mbytes_per_sec": 0,
00:17:33.908 "w_mbytes_per_sec": 0
00:17:33.908 },
00:17:33.908 "claimed": false,
00:17:33.908 "zoned": false,
00:17:33.908 "supported_io_types": {
00:17:33.908 "read": true,
00:17:33.908 "write": true,
00:17:33.908 "unmap": true,
00:17:33.908 "write_zeroes": true,
00:17:33.908 "flush": false,
00:17:33.908 "reset": true,
00:17:33.908 "compare": false,
00:17:33.908 "compare_and_write": false,
00:17:33.908 "abort": false,
00:17:33.908 "nvme_admin": false,
00:17:33.908 "nvme_io": false
00:17:33.908 },
00:17:33.908 "driver_specific": {
00:17:33.908 "lvol": {
00:17:33.908 "lvol_store_uuid": "dc5538ae-ff66-4fc8-88c3-c47c4c9a3081",
00:17:33.908 "base_bdev": "aio_bdev",
00:17:33.908 "thin_provision": false,
00:17:33.908 "num_allocated_clusters": 38,
00:17:33.908 "snapshot": false,
00:17:33.908 "clone": false,
00:17:33.908 "esnap_clone": false
00:17:33.908 }
00:17:33.908 }
00:17:33.908 }
00:17:33.908 ]
00:17:33.908 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0
00:17:33.908 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:33.908 16:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:17:34.165 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:17:34.165 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:34.165 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:17:34.423 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:17:34.423 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:17:34.680 [2024-05-15 16:38:41.746289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:17:34.680 16:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:34.938 request:
00:17:34.938 {
00:17:34.938 "uuid": "dc5538ae-ff66-4fc8-88c3-c47c4c9a3081",
00:17:34.938 "method": "bdev_lvol_get_lvstores",
00:17:34.938 "req_id": 1
00:17:34.938 }
00:17:34.938 Got JSON-RPC error response
00:17:34.938 response:
00:17:34.938 {
00:17:34.938 "code": -19,
00:17:34.938 "message": "No such device"
00:17:34.938 }
00:17:34.938 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1
00:17:34.938 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:34.938 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:34.938 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:34.938 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:17:35.196 aio_bdev
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2aeada42-6070-4756-b73b-531964c94e28
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=2aeada42-6070-4756-b73b-531964c94e28
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]]
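The NOT wrapper traced above is a negative assertion: with the backing aio_bdev deleted, the lvstore is hot-removed and bdev_lvol_get_lvstores must fail (the target answers with JSON-RPC error -19, "No such device"). A simplified stand-in for the same check, with LVS_UUID as a placeholder for the UUID used above:

rpc.py bdev_aio_delete aio_bdev    # hot-removes the lvstore along with it
if rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" 2>/dev/null; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi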
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:17:35.196 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:17:35.453 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2aeada42-6070-4756-b73b-531964c94e28 -t 2000
00:17:35.711 [
00:17:35.711 {
00:17:35.711 "name": "2aeada42-6070-4756-b73b-531964c94e28",
00:17:35.711 "aliases": [
00:17:35.711 "lvs/lvol"
00:17:35.711 ],
00:17:35.711 "product_name": "Logical Volume",
00:17:35.711 "block_size": 4096,
00:17:35.711 "num_blocks": 38912,
00:17:35.711 "uuid": "2aeada42-6070-4756-b73b-531964c94e28",
00:17:35.711 "assigned_rate_limits": {
00:17:35.711 "rw_ios_per_sec": 0,
00:17:35.711 "rw_mbytes_per_sec": 0,
00:17:35.711 "r_mbytes_per_sec": 0,
00:17:35.711 "w_mbytes_per_sec": 0
00:17:35.711 },
00:17:35.711 "claimed": false,
00:17:35.711 "zoned": false,
00:17:35.711 "supported_io_types": {
00:17:35.711 "read": true,
00:17:35.711 "write": true,
00:17:35.711 "unmap": true,
00:17:35.711 "write_zeroes": true,
00:17:35.711 "flush": false,
00:17:35.711 "reset": true,
00:17:35.711 "compare": false,
00:17:35.711 "compare_and_write": false,
00:17:35.711 "abort": false,
00:17:35.711 "nvme_admin": false,
00:17:35.711 "nvme_io": false
00:17:35.711 },
00:17:35.711 "driver_specific": {
00:17:35.711 "lvol": {
00:17:35.711 "lvol_store_uuid": "dc5538ae-ff66-4fc8-88c3-c47c4c9a3081",
00:17:35.711 "base_bdev": "aio_bdev",
00:17:35.711 "thin_provision": false,
00:17:35.711 "num_allocated_clusters": 38,
00:17:35.711 "snapshot": false,
00:17:35.711 "clone": false,
00:17:35.711 "esnap_clone": false
00:17:35.711 }
00:17:35.711 }
00:17:35.711 }
00:17:35.711 ]
00:17:35.711 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0
00:17:35.711 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:35.711 16:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:17:35.969 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:17:35.969 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:35.969 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:17:36.226 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:17:36.226 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2aeada42-6070-4756-b73b-531964c94e28
00:17:36.483 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc5538ae-ff66-4fc8-88c3-c47c4c9a3081
00:17:36.741 16:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:17:36.998 16:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:17:37.255
00:17:37.255 real 0m19.353s
00:17:37.255 user 0m49.209s
00:17:37.255 sys 0m4.536s
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:17:37.255 ************************************
00:17:37.255 END TEST lvs_grow_dirty
00:17:37.255 ************************************
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:17:37.255 nvmf_trace.0
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:37.255 rmmod nvme_tcp
00:17:37.255 rmmod nvme_fabrics
00:17:37.255 rmmod nvme_keyring
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1758146 ']'
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1758146
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1758146 ']'
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1758146
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:37.255 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1758146
00:17:37.256 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:37.256 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:37.256 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1758146' killing process with pid 1758146
00:17:37.256 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1758146
00:17:37.256 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1758146
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:37.513 16:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:40.040 16:38:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:40.040
00:17:40.040 real 0m43.004s
00:17:40.040 user 1m12.459s
00:17:40.040 sys 0m8.746s
00:17:40.040 16:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:40.040 16:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:17:40.040 ************************************
00:17:40.040 END TEST nvmf_lvs_grow
00:17:40.040 ************************************
00:17:40.040 16:38:46 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:17:40.040 16:38:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:17:40.040 16:38:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:40.040 16:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.040 ************************************
00:17:40.040 START TEST nvmf_bdev_io_wait
00:17:40.040 ************************************
00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:17:40.040 * Looking for test storage...
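nvmftestfini, traced above, archives the trace shm file and unwinds the kernel and namespace state before the next test starts. Roughly, and assuming the interface and namespace names used on this rig (the ip netns delete line is a guess at what the _remove_spdk_ns helper boils down to here):

tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
sync
modprobe -v -r nvme-tcp           # also unloads nvme_fabrics and nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1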
00:17:40.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.040 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.041 16:38:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:42.576 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:42.576 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:42.576 Found net devices under 0000:09:00.0: cvl_0_0 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:42.576 Found net devices under 0000:09:00.1: cvl_0_1 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.576 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:42.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms
00:17:42.577
00:17:42.577 --- 10.0.0.2 ping statistics ---
00:17:42.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:42.577 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:42.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:42.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms
00:17:42.577
00:17:42.577 --- 10.0.0.1 ping statistics ---
00:17:42.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:42.577 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1760999
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1760999
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1760999 ']'
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:17:42.577 [2024-05-15 16:38:49.472119] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
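The target here is started with --wait-for-rpc, which holds subsystem initialization until framework_start_init is called over RPC; that window is what lets bdev_io_wait shrink the bdev_io pool before any bdevs exist. A condensed sketch of the pattern, assuming the waitforlisten helper from autotest_common.sh and rpc.py on PATH:

ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"           # polls /var/tmp/spdk.sock until it answers
rpc.py bdev_set_options -p 5 -c 1  # tiny bdev_io pool: forces IO-wait retries
rpc.py framework_start_init        # resume the paused initialization
rpc.py nvmf_create_transport -t tcp -o -u 8192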
00:17:42.577 [2024-05-15 16:38:49.472204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.577 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.577 [2024-05-15 16:38:49.546577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.577 [2024-05-15 16:38:49.633679] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.577 [2024-05-15 16:38:49.633734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.577 [2024-05-15 16:38:49.633748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.577 [2024-05-15 16:38:49.633760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.577 [2024-05-15 16:38:49.633769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.577 [2024-05-15 16:38:49.633835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.577 [2024-05-15 16:38:49.633891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.577 [2024-05-15 16:38:49.633956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.577 [2024-05-15 16:38:49.633958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.577 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.835 [2024-05-15 16:38:49.823114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.835 16:38:49 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.835 Malloc0 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.835 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.836 [2024-05-15 16:38:49.888623] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:42.836 [2024-05-15 16:38:49.888942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1761022 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1761023 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1761026 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:42.836 { 00:17:42.836 "params": { 00:17:42.836 
"name": "Nvme$subsystem", 00:17:42.836 "trtype": "$TEST_TRANSPORT", 00:17:42.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "$NVMF_PORT", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.836 "hdgst": ${hdgst:-false}, 00:17:42.836 "ddgst": ${ddgst:-false} 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 } 00:17:42.836 EOF 00:17:42.836 )") 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1761028 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:42.836 { 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme$subsystem", 00:17:42.836 "trtype": "$TEST_TRANSPORT", 00:17:42.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "$NVMF_PORT", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.836 "hdgst": ${hdgst:-false}, 00:17:42.836 "ddgst": ${ddgst:-false} 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 } 00:17:42.836 EOF 00:17:42.836 )") 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:42.836 { 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme$subsystem", 00:17:42.836 "trtype": "$TEST_TRANSPORT", 00:17:42.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "$NVMF_PORT", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.836 "hdgst": ${hdgst:-false}, 00:17:42.836 "ddgst": ${ddgst:-false} 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 } 00:17:42.836 EOF 00:17:42.836 )") 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # cat 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:42.836 { 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme$subsystem", 00:17:42.836 "trtype": "$TEST_TRANSPORT", 00:17:42.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "$NVMF_PORT", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.836 "hdgst": ${hdgst:-false}, 00:17:42.836 "ddgst": ${ddgst:-false} 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 } 00:17:42.836 EOF 00:17:42.836 )") 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1761022 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme1", 00:17:42.836 "trtype": "tcp", 00:17:42.836 "traddr": "10.0.0.2", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "4420", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.836 "hdgst": false, 00:17:42.836 "ddgst": false 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 }' 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
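Note: the heredoc-and-jq sequence traced above is gen_nvmf_target_json assembling, per subsystem, one bdev_nvme_attach_controller stanza; the result is handed to each bdevperf instance through process substitution as --json /dev/fd/63. A hand-written equivalent for a single controller might look like the sketch below; the outer "subsystems" wrapper is an assumption about the final document shape, not something printed verbatim in this log:

gen_json() {
    cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}

# fed to bdevperf without a temp file, as the trace does (paths abbreviated)
bdevperf --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256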
00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme1", 00:17:42.836 "trtype": "tcp", 00:17:42.836 "traddr": "10.0.0.2", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "4420", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.836 "hdgst": false, 00:17:42.836 "ddgst": false 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 }' 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme1", 00:17:42.836 "trtype": "tcp", 00:17:42.836 "traddr": "10.0.0.2", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "4420", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.836 "hdgst": false, 00:17:42.836 "ddgst": false 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 }' 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:42.836 16:38:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.836 "params": { 00:17:42.836 "name": "Nvme1", 00:17:42.836 "trtype": "tcp", 00:17:42.836 "traddr": "10.0.0.2", 00:17:42.836 "adrfam": "ipv4", 00:17:42.836 "trsvcid": "4420", 00:17:42.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.836 "hdgst": false, 00:17:42.836 "ddgst": false 00:17:42.836 }, 00:17:42.836 "method": "bdev_nvme_attach_controller" 00:17:42.836 }' 00:17:42.836 [2024-05-15 16:38:49.934787] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:17:42.836 [2024-05-15 16:38:49.934796] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:17:42.836 [2024-05-15 16:38:49.934797] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:17:42.836 [2024-05-15 16:38:49.934796] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:17:42.836 [2024-05-15 16:38:49.934868] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:42.836 [2024-05-15 16:38:49.934882] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:42.836 [2024-05-15 16:38:49.934883] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:42.836 [2024-05-15 16:38:49.934883] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:42.836 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.094 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.094 [2024-05-15 16:38:50.131353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.094 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.094 [2024-05-15 16:38:50.210210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:43.094 [2024-05-15 16:38:50.237264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.094 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.094 [2024-05-15 16:38:50.315466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:43.352 [2024-05-15 16:38:50.339339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.352 [2024-05-15 16:38:50.414342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.352 [2024-05-15 16:38:50.419242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:43.352 [2024-05-15 16:38:50.482705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:43.352 Running I/O for 1 seconds... 00:17:43.352 Running I/O for 1 seconds... 00:17:43.610 Running I/O for 1 seconds... 00:17:43.610 Running I/O for 1 seconds...
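Note: bdev_io_wait drives four bdevperf instances concurrently against the same cnode1 namespace, one workload apiece, each pinned to its own core (-m) and shared-memory id (-i); the -i value is also why the EAL banners above carry file prefixes spdk1 through spdk4. Because all four processes share the console, their startup lines arrive interleaved. Schematically, mirroring the traced commands rather than quoting the script (paths abbreviated):

bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # the script waits on each pid in turn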
00:17:44.542 00:17:44.542 Latency(us) 00:17:44.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.542 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:44.542 Nvme1n1 : 1.00 187082.08 730.79 0.00 0.00 681.53 268.52 885.95 00:17:44.542 =================================================================================================================== 00:17:44.542 Total : 187082.08 730.79 0.00 0.00 681.53 268.52 885.95 00:17:44.542 00:17:44.542 Latency(us) 00:17:44.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.542 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:44.542 Nvme1n1 : 1.02 6438.98 25.15 0.00 0.00 19730.23 9320.68 32816.55 00:17:44.542 =================================================================================================================== 00:17:44.542 Total : 6438.98 25.15 0.00 0.00 19730.23 9320.68 32816.55 00:17:44.542 00:17:44.542 Latency(us) 00:17:44.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.543 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:44.543 Nvme1n1 : 1.01 6276.72 24.52 0.00 0.00 20317.02 6505.05 44079.03 00:17:44.543 =================================================================================================================== 00:17:44.543 Total : 6276.72 24.52 0.00 0.00 20317.02 6505.05 44079.03 00:17:44.543 00:17:44.543 Latency(us) 00:17:44.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.543 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:44.543 Nvme1n1 : 1.01 9284.05 36.27 0.00 0.00 13733.78 6602.15 25631.86 00:17:44.543 =================================================================================================================== 00:17:44.543 Total : 9284.05 36.27 0.00 0.00 13733.78 6602.15 25631.86 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1761023 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1761026 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1761028 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.108 rmmod nvme_tcp 00:17:45.108 rmmod nvme_fabrics 00:17:45.108 rmmod nvme_keyring 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1760999 ']' 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1760999 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1760999 ']' 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1760999 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1760999 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1760999' 00:17:45.108 killing process with pid 1760999 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1760999 00:17:45.108 [2024-05-15 16:38:52.136537] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:45.108 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1760999 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.374 16:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.274 16:38:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.274 00:17:47.274 real 0m7.687s 00:17:47.274 user 0m16.802s 00:17:47.274 sys 0m3.809s 00:17:47.274 16:38:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:47.274 16:38:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.274 ************************************ 00:17:47.274 END TEST nvmf_bdev_io_wait 00:17:47.274 ************************************ 00:17:47.274 16:38:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:47.274 16:38:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:47.274 16:38:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:47.274 16:38:54 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:17:47.274 ************************************ 00:17:47.274 START TEST nvmf_queue_depth 00:17:47.274 ************************************ 00:17:47.274 16:38:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:47.274 * Looking for test storage... 00:17:47.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.274 16:38:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.532 16:38:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:50.079 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.080 
16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:50.080 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:50.080 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:50.080 Found net devices under 0000:09:00.0: cvl_0_0 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:50.080 Found net devices under 0000:09:00.1: cvl_0_1 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.080 16:38:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:17:50.080 00:17:50.080 --- 10.0.0.2 ping statistics --- 00:17:50.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.080 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:17:50.080 00:17:50.080 --- 10.0.0.1 ping statistics --- 00:17:50.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.080 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1763660 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1763660 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1763660 ']' 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.080 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.080 [2024-05-15 16:38:57.176739] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
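Note: each target test repeats the loopback topology traced above: the NIC's first port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, and a one-packet ping in each direction gates the run. Reduced to its core, with device names and addresses exactly as they appear in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns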
00:17:50.080 [2024-05-15 16:38:57.176836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.080 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.080 [2024-05-15 16:38:57.250685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.338 [2024-05-15 16:38:57.335544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.338 [2024-05-15 16:38:57.335606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.338 [2024-05-15 16:38:57.335636] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.338 [2024-05-15 16:38:57.335648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.338 [2024-05-15 16:38:57.335658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.338 [2024-05-15 16:38:57.335688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.338 [2024-05-15 16:38:57.481746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.338 Malloc0 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.338 16:38:57 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.338 [2024-05-15 16:38:57.543340] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:50.338 [2024-05-15 16:38:57.543629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1763684 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1763684 /var/tmp/bdevperf.sock 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1763684 ']' 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.338 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.596 [2024-05-15 16:38:57.586905] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
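Note: the queue_depth test provisions the target over RPC (TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420), then starts bdevperf idle with -z on its own RPC socket, attaches the remote controller, and fires the run from the helper script. Condensed from the trace (paths abbreviated; rpc.py is SPDK's scripts/rpc.py):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The deep queue (-q 1024, far above typical defaults) is the point of the test: it keeps the initiator's submission path saturated for the full 10-second verify run, which is what the IOPS result below reflects.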
00:17:50.596 [2024-05-15 16:38:57.586980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763684 ] 00:17:50.596 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.596 [2024-05-15 16:38:57.656882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.596 [2024-05-15 16:38:57.743266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.853 NVMe0n1 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.853 16:38:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.853 Running I/O for 10 seconds... 00:18:03.047 00:18:03.047 Latency(us) 00:18:03.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.047 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:03.047 Verification LBA range: start 0x0 length 0x4000 00:18:03.047 NVMe0n1 : 10.10 8507.64 33.23 0.00 0.00 119838.03 25437.68 78449.02 00:18:03.047 =================================================================================================================== 00:18:03.047 Total : 8507.64 33.23 0.00 0.00 119838.03 25437.68 78449.02 00:18:03.047 0 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1763684 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1763684 ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1763684 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1763684 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1763684' 00:18:03.047 killing process with pid 1763684 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1763684 00:18:03.047 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.047 00:18:03.047 Latency(us) 00:18:03.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.047 =================================================================================================================== 00:18:03.047 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1763684 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.047 rmmod nvme_tcp 00:18:03.047 rmmod nvme_fabrics 00:18:03.047 rmmod nvme_keyring 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1763660 ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1763660 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1763660 ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1763660 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1763660 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1763660' 00:18:03.047 killing process with pid 1763660 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1763660 00:18:03.047 [2024-05-15 16:39:08.484101] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1763660 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.047 16:39:08 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.614 16:39:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.614 00:18:03.614 real 0m16.330s 00:18:03.614 user 0m22.494s 00:18:03.614 sys 0m3.265s 00:18:03.614 16:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:03.614 16:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:03.614 ************************************ 00:18:03.614 END TEST nvmf_queue_depth 00:18:03.614 ************************************ 00:18:03.614 16:39:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:03.614 16:39:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:03.614 16:39:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:03.614 16:39:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.614 ************************************ 00:18:03.614 START TEST nvmf_target_multipath 00:18:03.614 ************************************ 00:18:03.614 16:39:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:03.872 * Looking for test storage... 00:18:03.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.872 16:39:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.873 16:39:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:06.402 16:39:13 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:06.402 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:06.402 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.402 16:39:13 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:06.402 Found net devices under 0000:09:00.0: cvl_0_0 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:06.402 Found net devices under 0000:09:00.1: cvl_0_1 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.402 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:18:06.403 00:18:06.403 --- 10.0.0.2 ping statistics --- 00:18:06.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.403 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:18:06.403 00:18:06.403 --- 10.0.0.1 ping statistics --- 00:18:06.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.403 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:06.403 only one NIC for nvmf test 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.403 rmmod nvme_tcp 00:18:06.403 rmmod nvme_fabrics 00:18:06.403 rmmod nvme_keyring 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.403 16:39:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.933 00:18:08.933 real 0m4.722s 00:18:08.933 user 0m0.941s 00:18:08.933 sys 0m1.773s 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:08.933 16:39:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:08.933 ************************************ 00:18:08.933 END TEST nvmf_target_multipath 00:18:08.933 ************************************ 00:18:08.933 16:39:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:08.933 16:39:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:08.933 16:39:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:08.933 16:39:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.933 ************************************ 00:18:08.933 START TEST nvmf_zcopy 00:18:08.933 ************************************ 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:08.933 * Looking for test storage... 
00:18:08.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.933 16:39:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.934 16:39:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:11.461 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.461 
16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:11.461 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:11.461 Found net devices under 0000:09:00.0: cvl_0_0 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:11.461 Found net devices under 0000:09:00.1: cvl_0_1 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.461 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:18:11.462 00:18:11.462 --- 10.0.0.2 ping statistics --- 00:18:11.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.462 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:18:11.462 00:18:11.462 --- 10.0.0.1 ping statistics --- 00:18:11.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.462 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1770054 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1770054 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1770054 ']' 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 [2024-05-15 16:39:18.303710] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:18:11.462 [2024-05-15 16:39:18.303786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.462 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.462 [2024-05-15 16:39:18.383916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.462 [2024-05-15 16:39:18.475931] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.462 [2024-05-15 16:39:18.475989] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:11.462 [2024-05-15 16:39:18.476017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.462 [2024-05-15 16:39:18.476031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.462 [2024-05-15 16:39:18.476044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.462 [2024-05-15 16:39:18.476092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 [2024-05-15 16:39:18.627260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 [2024-05-15 16:39:18.643227] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:11.462 [2024-05-15 16:39:18.643561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 malloc0 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.462 { 00:18:11.462 "params": { 00:18:11.462 "name": "Nvme$subsystem", 00:18:11.462 "trtype": "$TEST_TRANSPORT", 00:18:11.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.462 "adrfam": "ipv4", 00:18:11.462 "trsvcid": "$NVMF_PORT", 00:18:11.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.462 "hdgst": ${hdgst:-false}, 00:18:11.462 "ddgst": ${ddgst:-false} 00:18:11.462 }, 00:18:11.462 "method": "bdev_nvme_attach_controller" 00:18:11.462 } 00:18:11.462 EOF 00:18:11.462 )") 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:11.462 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:11.720 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:11.720 16:39:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.720 "params": { 00:18:11.720 "name": "Nvme1", 00:18:11.720 "trtype": "tcp", 00:18:11.720 "traddr": "10.0.0.2", 00:18:11.720 "adrfam": "ipv4", 00:18:11.720 "trsvcid": "4420", 00:18:11.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.720 "hdgst": false, 00:18:11.720 "ddgst": false 00:18:11.720 }, 00:18:11.720 "method": "bdev_nvme_attach_controller" 00:18:11.720 }' 00:18:11.720 [2024-05-15 16:39:18.723650] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:18:11.720 [2024-05-15 16:39:18.723732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770195 ] 00:18:11.720 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.720 [2024-05-15 16:39:18.796252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.720 [2024-05-15 16:39:18.890743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.978 Running I/O for 10 seconds... 
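The 10-second verify run launched above is fully described by the preceding xtrace: the target side is built with four RPCs (a zcopy-enabled TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, and a 32 MiB / 4096-byte-block malloc bdev exported as NSID 1), and the initiator side is bdevperf fed the printed attach-controller JSON on /dev/fd/62. A minimal sketch of the same wiring, reassembled from the log rather than taken from the script, follows; the outer "subsystems"/"config" wrapper is an assumption, since the log prints only the inner fragment, and rpc_cmd in the log wraps scripts/rpc.py.

# Sketch only -- reassembled from the xtrace above, not the script verbatim.
# rpc.py stands for spdk/scripts/rpc.py; the target runs as nvmf_tgt -m 0x2
# inside the cvl_0_0_ns_spdk namespace, as shown earlier in the log.
rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The heredoc is handed to bdevperf on fd 62, matching --json /dev/fd/62;
# the "subsystems" wrapper below is assumed, the params are verbatim.
./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62<<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
EOF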
00:18:21.968 00:18:21.968 Latency(us) 00:18:21.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.968 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:21.968 Verification LBA range: start 0x0 length 0x1000 00:18:21.968 Nvme1n1 : 10.01 5794.85 45.27 0.00 0.00 22026.61 788.86 32816.55 00:18:21.968 =================================================================================================================== 00:18:21.968 Total : 5794.85 45.27 0.00 0.00 22026.61 788.86 32816.55 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1771383 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.226 { 00:18:22.226 "params": { 00:18:22.226 "name": "Nvme$subsystem", 00:18:22.226 "trtype": "$TEST_TRANSPORT", 00:18:22.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.226 "adrfam": "ipv4", 00:18:22.226 "trsvcid": "$NVMF_PORT", 00:18:22.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.226 "hdgst": ${hdgst:-false}, 00:18:22.226 "ddgst": ${ddgst:-false} 00:18:22.226 }, 00:18:22.226 "method": "bdev_nvme_attach_controller" 00:18:22.226 } 00:18:22.226 EOF 00:18:22.226 )") 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:22.226 [2024-05-15 16:39:29.349996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.350038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:22.226 16:39:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.226 "params": { 00:18:22.226 "name": "Nvme1", 00:18:22.226 "trtype": "tcp", 00:18:22.226 "traddr": "10.0.0.2", 00:18:22.226 "adrfam": "ipv4", 00:18:22.226 "trsvcid": "4420", 00:18:22.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.226 "hdgst": false, 00:18:22.226 "ddgst": false 00:18:22.226 }, 00:18:22.226 "method": "bdev_nvme_attach_controller" 00:18:22.226 }' 00:18:22.226 [2024-05-15 16:39:29.357942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.357965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.365964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.365986] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.373986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.374007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.382006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.382027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.388322] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:18:22.226 [2024-05-15 16:39:29.388406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771383 ] 00:18:22.226 [2024-05-15 16:39:29.390027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.390048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.398047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.398068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.406070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.406091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.414091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.414117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.226 [2024-05-15 16:39:29.422112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.422133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.430133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.226 [2024-05-15 16:39:29.430154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.226 [2024-05-15 16:39:29.438155] 
00:18:22.483 [2024-05-15 16:39:29.457018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.484 [2024-05-15 16:39:29.546425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.484 [... the NSID-1 error pair keeps repeating throughout bdevperf startup, interleaved with the two notices above, 16:39:29.438177 through 16:39:29.662820 ...]
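[editor's note] The repeating pair is the target rejecting namespace-add RPCs: spdk_nvmf_subsystem_add_ns_ext() refuses a request for NSID 1 because that NSID is already allocated, and the RPC layer then logs the failure. A hedged sketch of the kind of call that provokes this response; the bdev name Malloc0 is a placeholder, not a value read from this log:

# sketch: adding a namespace with an NSID that is already allocated fails with
# "Requested NSID 1 already in use" / "Unable to add namespace"
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0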
00:18:22.741 [... error pair continues, 16:39:29.662846 through 16:39:29.727014 ...]
00:18:22.741 Running I/O for 5 seconds...
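[editor's note] "Running I/O for 5 seconds..." is bdevperf's run banner, and the EAL parameters line earlier confirms bdevperf is the application under test here. A minimal sketch of an invocation consistent with that banner; the binary path, config path, queue depth, I/O size, and workload are assumptions, not values read from this log:

# sketch: drive the attached Nvme1 bdev for 5 seconds with bdevperf
# (the binary lives under build/examples/ in recent SPDK trees; adjust to your layout)
# /tmp/nvme_attach.json: hypothetical file holding the attach-controller JSON shown above
./build/examples/bdevperf --json /tmp/nvme_attach.json -q 128 -o 4096 -w verify -t 5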
00:18:22.741 [... the same subsystem.c:2029 / nvmf_rpc.c:1536 error pair repeats continuously while the 5-second I/O run proceeds, 16:39:29.735012 through 16:39:32.587956, with only the timestamps changing; several hundred repeats elided ...]
00:18:25.581 [2024-05-15 16:39:32.587988]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.581 [2024-05-15 16:39:32.599003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.581 [2024-05-15 16:39:32.599033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.581 [2024-05-15 16:39:32.610333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.581 [2024-05-15 16:39:32.610360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.581 [2024-05-15 16:39:32.621857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.581 [2024-05-15 16:39:32.621886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.581 [2024-05-15 16:39:32.632987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.581 [2024-05-15 16:39:32.633017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.581 [2024-05-15 16:39:32.644379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.581 [2024-05-15 16:39:32.644406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.581 [2024-05-15 16:39:32.656145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.656175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.667869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.667899] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.679421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.679449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.693254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.693297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.704233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.704276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.715544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.715574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.727251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.727293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.738731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.738761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.750231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.750275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.761599] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.761643] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.773329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.773356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.784484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.784527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.795880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.795910] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.582 [2024-05-15 16:39:32.807136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.582 [2024-05-15 16:39:32.807166] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.818557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.818602] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.831521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.831547] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.842643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.842673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.854358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.854384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.865546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.865577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.876765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.876796] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.887711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.887741] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.898849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.898879] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.910132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.910161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.921921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.921951] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.933547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.933589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.945181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.945210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.956847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.956877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.968096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.968126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.979459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.979493] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:32.990972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:32.991001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:33.002727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:33.002756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:33.014750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:33.014780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:33.026516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:33.026542] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:33.037996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:33.038026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:33.049184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:33.049214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.840 [2024-05-15 16:39:33.060675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.840 [2024-05-15 16:39:33.060705] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.097 [2024-05-15 16:39:33.071955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.097 [2024-05-15 16:39:33.071985] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.097 [2024-05-15 16:39:33.083473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.083500] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.094685] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.094715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.106076] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.106107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.117620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.117650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.129082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.129112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.142152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.142182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.153239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.153282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.164339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.164367] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.177194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.177233] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.187815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.187845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.200007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.200038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.211397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.211425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.224760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.224790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.235531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.235561] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.247749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.247779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.259381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.259408] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.270725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.270756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.282429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.282457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.293719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.293750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.305152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.305183] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.098 [2024-05-15 16:39:33.316513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.098 [2024-05-15 16:39:33.316540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.328577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.328607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.340125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.340155] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.351887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.351919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.364066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.364097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.375994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.376025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.387730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.387761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.399536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.399562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.411783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.411814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.423146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.423175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.435330] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.435358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.446885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.446915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.458627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.458657] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.470380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.470407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.482007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.482037] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.493815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.493845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.505577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.505621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.517396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.517423] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.529434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.529462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.541424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.541451] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.553411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.553438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.565463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.565490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.356 [2024-05-15 16:39:33.577328] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.356 [2024-05-15 16:39:33.577355] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.588610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.588640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.600133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.600163] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.611576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.611606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.623144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.623173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.634371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.634398] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.646151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.646181] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.657855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.657885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.671309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.671336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.682170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.682200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.693801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.693830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.705676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.705706] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.717518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.717544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.729311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.729338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.740933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.740963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.754652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.754682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.765820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.765850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.777310] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.777337] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.788523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.788554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.800040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.800069] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.811036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.811066] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.822622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.822652] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.614 [2024-05-15 16:39:33.833992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.614 [2024-05-15 16:39:33.834022] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.845404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.845431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.857322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.857349] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.868644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.868673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.880358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.880386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.891805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.891835] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.903096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.903126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.914932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.914962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.926557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.926587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.938055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.938085] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.949504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.949531] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.961038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.961068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.972744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.972774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.984656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.984687] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:33.995920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:33.995949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.007338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.007365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.018873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.018903] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.030602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.030633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.042772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.042803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.054610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.054640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.066678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.066708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.078540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.872 [2024-05-15 16:39:34.078581] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.872 [2024-05-15 16:39:34.090374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.873 [2024-05-15 16:39:34.090401] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.104089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.104119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.115124] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.115154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.126505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.126532] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.137762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.137792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.149148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.149178] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.160546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.160577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.172279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.172307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.130 [2024-05-15 16:39:34.184116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.130 [2024-05-15 16:39:34.184147] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.195869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.195899] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.207554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.207584] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.220624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.220654] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.231071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.231100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.243035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.243063] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.253587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.253613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.264012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.264038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.274746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.274772] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.285305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.285332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.299045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.299082] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.309423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.309450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.319629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.319655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.330005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.330031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.340755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.340783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.131 [2024-05-15 16:39:34.350725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.131 [2024-05-15 16:39:34.350752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.361292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.361319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.372102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.372129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.382920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.382947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.395641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.395667] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.406157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.406185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.420696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.420725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.430962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.430989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.442066] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.442093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.453166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.453193] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.464094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.464122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.474970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.474996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.485675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.485701] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.498069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.498096] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.508180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.508239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.518810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.518836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.529670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.529697] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.540647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.540675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.552558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.552599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.563618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.563644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.574706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.574733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.587011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.587038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.597321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.597349] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.389 [2024-05-15 16:39:34.608587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.389 [2024-05-15 16:39:34.608613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.621346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.621373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.631348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.631375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.642183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.642232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.654563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.654590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.664719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.664746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.675257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.675284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.686055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.686081] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.698284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.698311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.707840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.707867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.719237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.719275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.731768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.731794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.743636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.743678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.752053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.752079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 00:18:27.648 Latency(us) 00:18:27.648 Device Information : runtime(s) 
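The repeating error pair above is expected behavior here, not a failure: while the I/O job runs, the test keeps re-issuing nvmf_subsystem_add_ns RPCs for NSID 1, which is already attached, so the target rejects every attempt (the background loop doing this appears to be PID 1771383, reaped at zcopy.sh@49 below). A minimal sketch of the colliding call, assuming a running target with scripts/rpc.py on PATH and a bdev named Malloc0 (placeholder names, not taken from this run):

    NQN=nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0 -n 1   # first add succeeds; NSID 1 is now attached
    scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0 -n 1   # rejected: "Requested NSID 1 already in use"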
IOPS MiB/s Fail/s TO/s Average min max 00:18:27.648 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:27.648 Nvme1n1 : 5.01 11132.97 86.98 0.00 0.00 11482.93 4757.43 25049.32 00:18:27.648 =================================================================================================================== 00:18:27.648 Total : 11132.97 86.98 0.00 0.00 11482.93 4757.43 25049.32 00:18:27.648 [2024-05-15 16:39:34.758825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.758848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.766843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.766867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.774933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.774981] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.782962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.783013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.790989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.791041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.799000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.799049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.807014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.807061] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.815049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.815101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.823070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.823119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.831085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.831137] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.839117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.839167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.847134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.847186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.855161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.855209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.863183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.863241] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.648 [2024-05-15 16:39:34.871195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.648 [2024-05-15 16:39:34.871263] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.879223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.879277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.887241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.887290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.895247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.895299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.903241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.903282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.911303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.911352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.919336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.919385] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.927356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.927405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.935336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.935362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.943366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.943402] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.951420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.951471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.959448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.959498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.967397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.967418] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.975417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:18:27.906 [2024-05-15 16:39:34.975438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 [2024-05-15 16:39:34.983438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.906 [2024-05-15 16:39:34.983459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1771383) - No such process 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1771383 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.906 16:39:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.906 delay0 00:18:27.906 16:39:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.906 16:39:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:27.906 16:39:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.906 16:39:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.907 16:39:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.907 16:39:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:27.907 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.907 [2024-05-15 16:39:35.102170] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:36.025 Initializing NVMe Controllers 00:18:36.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:36.025 Initialization complete. Launching workers. 
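The trace above captures the whole zcopy abort flow: namespace 1 is detached, a delay bdev with one-second average and p99 latencies is layered over malloc0, the delay bdev is re-attached as NSID 1, and the abort example is pointed at it so in-flight commands are slow enough to abort. Since rpc_cmd in this harness forwards to scripts/rpc.py, a minimal hand-run sketch of the same steps (assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and listening on the default /var/tmp/spdk.sock; paths relative to the SPDK checkout):

# Detach the namespace bdevperf was driving, then build a deliberately
# slow delay bdev on top of malloc0 (latency arguments in microseconds).
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Re-expose it as NSID 1 so queued I/O has something slow to abort.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive it with the abort example, mirroring the flags in the trace.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'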
00:18:36.025 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 9851 00:18:36.025 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10039, failed to submit 79 00:18:36.025 success 9927, unsuccess 112, failed 0 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.025 rmmod nvme_tcp 00:18:36.025 rmmod nvme_fabrics 00:18:36.025 rmmod nvme_keyring 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1770054 ']' 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1770054 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1770054 ']' 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1770054 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1770054 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1770054' 00:18:36.025 killing process with pid 1770054 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1770054 00:18:36.025 [2024-05-15 16:39:41.918634] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:36.025 16:39:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1770054 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.025 16:39:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.403 
16:39:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.403 00:18:37.403 real 0m28.607s 00:18:37.403 user 0m40.535s 00:18:37.403 sys 0m9.545s 00:18:37.403 16:39:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.403 16:39:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.403 ************************************ 00:18:37.403 END TEST nvmf_zcopy 00:18:37.403 ************************************ 00:18:37.403 16:39:44 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.403 16:39:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:37.403 16:39:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:37.403 16:39:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.403 ************************************ 00:18:37.403 START TEST nvmf_nmic 00:18:37.403 ************************************ 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.403 * Looking for test storage... 00:18:37.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.403 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.404 16:39:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:39.940 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:39.940 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:39.940 Found net devices under 0000:09:00.0: cvl_0_0 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.940 16:39:46 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:39.940 Found net devices under 0000:09:00.1: cvl_0_1 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:39.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:18:39.940 00:18:39.940 --- 10.0.0.2 ping statistics --- 00:18:39.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.940 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:18:39.940 00:18:39.940 --- 10.0.0.1 ping statistics --- 00:18:39.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.940 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1775058 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1775058 00:18:39.940 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1775058 ']' 00:18:39.941 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.941 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.941 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.941 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.941 16:39:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.941 [2024-05-15 16:39:46.957898] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:18:39.941 [2024-05-15 16:39:46.957984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.941 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.941 [2024-05-15 16:39:47.032675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.941 [2024-05-15 16:39:47.118407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.941 [2024-05-15 16:39:47.118464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.941 [2024-05-15 16:39:47.118492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.941 [2024-05-15 16:39:47.118504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.941 [2024-05-15 16:39:47.118514] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.941 [2024-05-15 16:39:47.118566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.941 [2024-05-15 16:39:47.118623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.941 [2024-05-15 16:39:47.118941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.941 [2024-05-15 16:39:47.118945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 [2024-05-15 16:39:47.279944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 Malloc0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 [2024-05-15 16:39:47.333383] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:40.199 [2024-05-15 16:39:47.333711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:40.199 test case1: single bdev can't be used in multiple subsystems 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.199 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 [2024-05-15 16:39:47.357528] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:40.199 [2024-05-15 16:39:47.357572] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:40.199 [2024-05-15 16:39:47.357588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.199 request: 00:18:40.199 { 00:18:40.199 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:40.199 "namespace": { 00:18:40.199 "bdev_name": "Malloc0", 00:18:40.199 "no_auto_visible": false 00:18:40.199 }, 00:18:40.199 "method": "nvmf_subsystem_add_ns", 00:18:40.199 "req_id": 1 00:18:40.200 } 00:18:40.200 Got JSON-RPC error response 00:18:40.200 response: 00:18:40.200 { 00:18:40.200 "code": -32602, 00:18:40.200 "message": "Invalid parameters" 00:18:40.200 } 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:40.200 16:39:47 
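test case1 above is exercising the bdev claim rule: when cnode1 added Malloc0, the bdev layer took an exclusive_write claim on it (the bdev_open error in the trace), so attaching the same bdev to a second subsystem is refused and surfaces as JSON-RPC error -32602 Invalid parameters. A sketch of the same negative probe via scripts/rpc.py, assuming Malloc0 is already namespace 1 of cnode1 as in this run:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Expected to fail: Malloc0 is claimed exclusive_write by cnode1, so the
# target rejects the namespace and rpc.py exits non-zero.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'add_ns refused, as expected'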
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:40.200 Adding namespace failed - expected result. 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:40.200 test case2: host connect to nvmf target in multiple paths 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.200 [2024-05-15 16:39:47.365661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.200 16:39:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.132 16:39:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:41.733 16:39:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:41.733 16:39:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:41.733 16:39:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.733 16:39:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:41.733 16:39:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:43.629 16:39:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:43.629 [global] 00:18:43.629 thread=1 00:18:43.629 invalidate=1 00:18:43.629 rw=write 00:18:43.629 time_based=1 00:18:43.629 runtime=1 00:18:43.629 ioengine=libaio 00:18:43.629 direct=1 00:18:43.629 bs=4096 00:18:43.629 iodepth=1 00:18:43.629 norandommap=0 00:18:43.629 numjobs=1 00:18:43.629 00:18:43.629 verify_dump=1 00:18:43.629 verify_backlog=512 00:18:43.629 verify_state_save=0 00:18:43.629 do_verify=1 00:18:43.629 verify=crc32c-intel 00:18:43.629 [job0] 00:18:43.629 filename=/dev/nvme0n1 00:18:43.629 Could not set queue depth (nvme0n1) 00:18:43.887 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:18:43.887 fio-3.35 00:18:43.887 Starting 1 thread 00:18:45.258 00:18:45.258 job0: (groupid=0, jobs=1): err= 0: pid=1775695: Wed May 15 16:39:52 2024 00:18:45.258 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:18:45.258 slat (nsec): min=9446, max=34792, avg=27139.68, stdev=9353.53 00:18:45.258 clat (usec): min=40858, max=41196, avg=40971.20, stdev=72.91 00:18:45.258 lat (usec): min=40892, max=41206, avg=40998.34, stdev=66.71 00:18:45.258 clat percentiles (usec): 00:18:45.258 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:45.258 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:45.258 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:45.258 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:45.258 | 99.99th=[41157] 00:18:45.258 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:45.258 slat (nsec): min=8847, max=39727, avg=18468.20, stdev=6766.91 00:18:45.258 clat (usec): min=191, max=372, avg=230.48, stdev=17.21 00:18:45.258 lat (usec): min=204, max=408, avg=248.95, stdev=20.73 00:18:45.258 clat percentiles (usec): 00:18:45.258 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:18:45.258 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 233], 00:18:45.258 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 251], 00:18:45.258 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 371], 00:18:45.258 | 99.99th=[ 371] 00:18:45.258 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:45.258 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:45.258 lat (usec) : 250=90.64%, 500=5.24% 00:18:45.258 lat (msec) : 50=4.12% 00:18:45.258 cpu : usr=0.68%, sys=1.16%, ctx=534, majf=0, minf=2 00:18:45.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.258 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.258 00:18:45.258 Run status group 0 (all jobs): 00:18:45.258 READ: bw=85.3KiB/s (87.3kB/s), 85.3KiB/s-85.3KiB/s (87.3kB/s-87.3kB/s), io=88.0KiB (90.1kB), run=1032-1032msec 00:18:45.258 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:18:45.258 00:18:45.258 Disk stats (read/write): 00:18:45.258 nvme0n1: ios=68/512, merge=0/0, ticks=768/119, in_queue=887, util=91.88% 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.258 16:39:52 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.258 rmmod nvme_tcp 00:18:45.258 rmmod nvme_fabrics 00:18:45.258 rmmod nvme_keyring 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1775058 ']' 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1775058 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1775058 ']' 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1775058 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1775058 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1775058' 00:18:45.258 killing process with pid 1775058 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1775058 00:18:45.258 [2024-05-15 16:39:52.276666] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:45.258 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1775058 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.517 16:39:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.427 16:39:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:47.427 00:18:47.427 real 0m10.308s 00:18:47.427 user 0m22.627s 00:18:47.427 sys 0m2.514s 00:18:47.427 16:39:54 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:47.427 16:39:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:47.427 ************************************ 00:18:47.427 END TEST nvmf_nmic 00:18:47.427 ************************************ 00:18:47.427 16:39:54 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:47.427 16:39:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:47.427 16:39:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:47.427 16:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.427 ************************************ 00:18:47.427 START TEST nvmf_fio_target 00:18:47.427 ************************************ 00:18:47.427 16:39:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:47.685 * Looking for test storage... 00:18:47.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.685 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.686 16:39:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:50.219 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:50.219 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.219 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.220 
16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:50.220 Found net devices under 0000:09:00.0: cvl_0_0 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:50.220 Found net devices under 0000:09:00.1: cvl_0_1 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:50.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:18:50.220 00:18:50.220 --- 10.0.0.2 ping statistics --- 00:18:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.220 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:18:50.220 00:18:50.220 --- 10.0.0.1 ping statistics --- 00:18:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.220 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1778172 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1778172 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 1778172 ']' 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
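# nvmf_tcp_init above splits one physical link pair across a network namespace
# so initiator and target run on the same host over real NICs: cvl_0_0 moves
# into namespace cvl_0_0_ns_spdk at 10.0.0.2/24, while cvl_0_1 stays in the
# root namespace as the initiator at 10.0.0.1/24. Condensed from the trace
# (same commands, minus the addr flushes and error handling):
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   ip netns exec cvl_0_0_ns_spdk ip link set lo up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
#
# Both directions are then ping-verified. Every target-side command, including
# the nvmf_tgt launch above, runs under "ip netns exec cvl_0_0_ns_spdk";
# waitforlisten now polls /var/tmp/spdk.sock until the app's RPC server is up.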
00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:50.220 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.220 [2024-05-15 16:39:57.433401] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:18:50.220 [2024-05-15 16:39:57.433491] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.478 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.478 [2024-05-15 16:39:57.511604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:50.478 [2024-05-15 16:39:57.597263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.478 [2024-05-15 16:39:57.597319] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.478 [2024-05-15 16:39:57.597348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.478 [2024-05-15 16:39:57.597361] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.478 [2024-05-15 16:39:57.597370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.478 [2024-05-15 16:39:57.597428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.478 [2024-05-15 16:39:57.597495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.478 [2024-05-15 16:39:57.597562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:50.478 [2024-05-15 16:39:57.597564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.735 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:50.735 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:50.735 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.735 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.736 16:39:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.736 16:39:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.736 16:39:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:50.993 [2024-05-15 16:39:58.023989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.993 16:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.251 16:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:51.251 16:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.509 16:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:51.509 16:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.767 16:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:51.767 16:39:58 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.024 16:39:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:52.024 16:39:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:52.282 16:39:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.539 16:39:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:52.539 16:39:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.797 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:52.797 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.055 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:53.055 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:53.313 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:53.570 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:53.570 16:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.827 16:40:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:53.827 16:40:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:54.085 16:40:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.342 [2024-05-15 16:40:01.503739] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:54.342 [2024-05-15 16:40:01.504049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.342 16:40:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:54.597 16:40:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:54.853 16:40:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:55.416 16:40:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:18:55.416 16:40:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:55.416 16:40:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.416 16:40:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:55.416 16:40:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:55.416 16:40:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:57.942 16:40:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:57.942 [global] 00:18:57.942 thread=1 00:18:57.942 invalidate=1 00:18:57.942 rw=write 00:18:57.942 time_based=1 00:18:57.942 runtime=1 00:18:57.942 ioengine=libaio 00:18:57.942 direct=1 00:18:57.942 bs=4096 00:18:57.942 iodepth=1 00:18:57.942 norandommap=0 00:18:57.942 numjobs=1 00:18:57.942 00:18:57.942 verify_dump=1 00:18:57.942 verify_backlog=512 00:18:57.942 verify_state_save=0 00:18:57.942 do_verify=1 00:18:57.942 verify=crc32c-intel 00:18:57.942 [job0] 00:18:57.942 filename=/dev/nvme0n1 00:18:57.942 [job1] 00:18:57.942 filename=/dev/nvme0n2 00:18:57.942 [job2] 00:18:57.942 filename=/dev/nvme0n3 00:18:57.942 [job3] 00:18:57.942 filename=/dev/nvme0n4 00:18:57.942 Could not set queue depth (nvme0n1) 00:18:57.942 Could not set queue depth (nvme0n2) 00:18:57.942 Could not set queue depth (nvme0n3) 00:18:57.942 Could not set queue depth (nvme0n4) 00:18:57.942 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.942 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.942 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.942 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.942 fio-3.35 00:18:57.942 Starting 4 threads 00:18:58.873 00:18:58.873 job0: (groupid=0, jobs=1): err= 0: pid=1779121: Wed May 15 16:40:06 2024 00:18:58.873 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:18:58.873 slat (nsec): min=7904, max=46711, avg=19958.59, stdev=9657.49 00:18:58.873 clat (usec): min=40887, max=41055, avg=40973.98, stdev=44.18 00:18:58.873 lat (usec): min=40919, max=41062, avg=40993.93, stdev=39.44 00:18:58.873 clat percentiles (usec): 00:18:58.873 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:58.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:58.873 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:18:58.873 | 99.99th=[41157] 00:18:58.873 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:58.873 slat (nsec): min=6674, max=65966, avg=15114.52, stdev=7509.98 00:18:58.873 clat (usec): min=177, max=854, avg=234.41, stdev=48.65 00:18:58.873 lat (usec): min=185, max=865, avg=249.53, stdev=49.16 00:18:58.873 clat percentiles (usec): 00:18:58.873 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:18:58.873 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:18:58.873 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 314], 00:18:58.873 | 99.00th=[ 359], 99.50th=[ 416], 99.90th=[ 857], 99.95th=[ 857], 00:18:58.873 | 99.99th=[ 857] 00:18:58.873 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.873 lat (usec) : 250=71.91%, 500=23.78%, 1000=0.19% 00:18:58.873 lat (msec) : 50=4.12% 00:18:58.873 cpu : usr=0.39%, sys=0.78%, ctx=534, majf=0, minf=1 00:18:58.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.873 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.873 job1: (groupid=0, jobs=1): err= 0: pid=1779122: Wed May 15 16:40:06 2024 00:18:58.873 read: IOPS=25, BW=101KiB/s (104kB/s)(104KiB/1028msec) 00:18:58.873 slat (nsec): min=10072, max=33525, avg=22486.04, stdev=8908.86 00:18:58.873 clat (usec): min=424, max=41541, avg=34771.74, stdev=14907.36 00:18:58.873 lat (usec): min=458, max=41573, avg=34794.22, stdev=14902.87 00:18:58.873 clat percentiles (usec): 00:18:58.873 | 1.00th=[ 424], 5.00th=[ 437], 10.00th=[ 545], 20.00th=[40633], 00:18:58.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:58.873 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:58.873 | 99.99th=[41681] 00:18:58.873 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:18:58.873 slat (nsec): min=8512, max=53641, avg=15842.79, stdev=7939.32 00:18:58.873 clat (usec): min=177, max=339, avg=219.90, stdev=22.57 00:18:58.873 lat (usec): min=187, max=350, avg=235.75, stdev=25.40 00:18:58.873 clat percentiles (usec): 00:18:58.873 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:18:58.873 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:18:58.873 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 260], 00:18:58.873 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 338], 00:18:58.873 | 99.99th=[ 338] 00:18:58.873 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.874 lat (usec) : 250=87.73%, 500=7.81%, 750=0.37% 00:18:58.874 lat (msec) : 50=4.09% 00:18:58.874 cpu : usr=1.07%, sys=0.49%, ctx=538, majf=0, minf=1 00:18:58.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.874 issued rwts: total=26,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:18:58.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.874 job2: (groupid=0, jobs=1): err= 0: pid=1779129: Wed May 15 16:40:06 2024 00:18:58.874 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:18:58.874 slat (nsec): min=10035, max=35366, avg=20826.00, stdev=8860.60 00:18:58.874 clat (usec): min=40468, max=41042, avg=40947.43, stdev=114.43 00:18:58.874 lat (usec): min=40478, max=41057, avg=40968.26, stdev=115.23 00:18:58.874 clat percentiles (usec): 00:18:58.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:58.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:58.874 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:58.874 | 99.99th=[41157] 00:18:58.874 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:18:58.874 slat (nsec): min=7152, max=55880, avg=16780.71, stdev=8662.39 00:18:58.874 clat (usec): min=175, max=498, avg=250.98, stdev=49.95 00:18:58.874 lat (usec): min=186, max=507, avg=267.76, stdev=50.92 00:18:58.874 clat percentiles (usec): 00:18:58.874 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 210], 00:18:58.874 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 247], 00:18:58.874 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 343], 00:18:58.874 | 99.00th=[ 388], 99.50th=[ 424], 99.90th=[ 498], 99.95th=[ 498], 00:18:58.874 | 99.99th=[ 498] 00:18:58.874 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.874 lat (usec) : 250=58.24%, 500=37.64% 00:18:58.874 lat (msec) : 50=4.12% 00:18:58.874 cpu : usr=0.10%, sys=1.25%, ctx=534, majf=0, minf=1 00:18:58.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.874 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.874 job3: (groupid=0, jobs=1): err= 0: pid=1779130: Wed May 15 16:40:06 2024 00:18:58.874 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:18:58.874 slat (nsec): min=8590, max=49125, avg=22509.68, stdev=10515.37 00:18:58.874 clat (usec): min=40566, max=41068, avg=40946.97, stdev=100.43 00:18:58.874 lat (usec): min=40575, max=41083, avg=40969.48, stdev=100.35 00:18:58.874 clat percentiles (usec): 00:18:58.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:58.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:58.874 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:58.874 | 99.99th=[41157] 00:18:58.874 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:58.874 slat (nsec): min=7291, max=78859, avg=16647.02, stdev=9834.17 00:18:58.874 clat (usec): min=175, max=393, avg=233.69, stdev=47.31 00:18:58.874 lat (usec): min=183, max=466, avg=250.33, stdev=49.68 00:18:58.874 clat percentiles (usec): 00:18:58.874 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:18:58.874 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 
00:18:58.874 | 70.00th=[ 239], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 334], 00:18:58.874 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 396], 99.95th=[ 396], 00:18:58.874 | 99.99th=[ 396] 00:18:58.874 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.874 lat (usec) : 250=71.91%, 500=23.97% 00:18:58.874 lat (msec) : 50=4.12% 00:18:58.874 cpu : usr=0.19%, sys=1.07%, ctx=537, majf=0, minf=2 00:18:58.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.874 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.874 00:18:58.874 Run status group 0 (all jobs): 00:18:58.874 READ: bw=354KiB/s (362kB/s), 84.5KiB/s-101KiB/s (86.6kB/s-104kB/s), io=368KiB (377kB), run=1028-1041msec 00:18:58.874 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-1992KiB/s (2015kB/s-2040kB/s), io=8192KiB (8389kB), run=1028-1041msec 00:18:58.874 00:18:58.874 Disk stats (read/write): 00:18:58.874 nvme0n1: ios=67/512, merge=0/0, ticks=740/121, in_queue=861, util=87.98% 00:18:58.874 nvme0n2: ios=44/512, merge=0/0, ticks=801/107, in_queue=908, util=91.15% 00:18:58.874 nvme0n3: ios=42/512, merge=0/0, ticks=893/125, in_queue=1018, util=90.99% 00:18:58.874 nvme0n4: ios=74/512, merge=0/0, ticks=915/111, in_queue=1026, util=97.68% 00:18:58.874 16:40:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:58.874 [global] 00:18:58.874 thread=1 00:18:58.874 invalidate=1 00:18:58.874 rw=randwrite 00:18:58.874 time_based=1 00:18:58.874 runtime=1 00:18:58.874 ioengine=libaio 00:18:58.874 direct=1 00:18:58.874 bs=4096 00:18:58.874 iodepth=1 00:18:58.874 norandommap=0 00:18:58.874 numjobs=1 00:18:58.874 00:18:58.874 verify_dump=1 00:18:58.874 verify_backlog=512 00:18:58.874 verify_state_save=0 00:18:58.874 do_verify=1 00:18:58.874 verify=crc32c-intel 00:18:58.874 [job0] 00:18:58.874 filename=/dev/nvme0n1 00:18:58.874 [job1] 00:18:58.874 filename=/dev/nvme0n2 00:18:58.874 [job2] 00:18:58.874 filename=/dev/nvme0n3 00:18:58.874 [job3] 00:18:58.874 filename=/dev/nvme0n4 00:18:59.135 Could not set queue depth (nvme0n1) 00:18:59.135 Could not set queue depth (nvme0n2) 00:18:59.135 Could not set queue depth (nvme0n3) 00:18:59.135 Could not set queue depth (nvme0n4) 00:18:59.135 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.135 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.135 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.135 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.135 fio-3.35 00:18:59.135 Starting 4 threads 00:19:00.522 00:19:00.522 job0: (groupid=0, jobs=1): err= 0: pid=1779434: Wed May 15 16:40:07 2024 00:19:00.522 read: IOPS=1253, BW=5015KiB/s (5135kB/s)(5180KiB/1033msec) 00:19:00.522 slat (nsec): min=5053, max=81265, avg=16507.16, stdev=9045.22 00:19:00.522 clat (usec): min=242, max=41765, avg=466.77, stdev=1613.09 00:19:00.522 lat (usec): min=249, 
max=41773, avg=483.28, stdev=1613.02 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 306], 00:19:00.522 | 30.00th=[ 326], 40.00th=[ 359], 50.00th=[ 404], 60.00th=[ 429], 00:19:00.522 | 70.00th=[ 465], 80.00th=[ 494], 90.00th=[ 537], 95.00th=[ 562], 00:19:00.522 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41681], 00:19:00.522 | 99.99th=[41681] 00:19:00.522 write: IOPS=1486, BW=5948KiB/s (6090kB/s)(6144KiB/1033msec); 0 zone resets 00:19:00.522 slat (nsec): min=7064, max=54156, avg=15528.66, stdev=7061.20 00:19:00.522 clat (usec): min=165, max=3416, avg=239.90, stdev=91.44 00:19:00.522 lat (usec): min=174, max=3436, avg=255.43, stdev=92.45 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 206], 00:19:00.522 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:19:00.522 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 322], 00:19:00.522 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 3425], 00:19:00.522 | 99.99th=[ 3425] 00:19:00.522 bw ( KiB/s): min= 4096, max= 8192, per=34.07%, avg=6144.00, stdev=2896.31, samples=2 00:19:00.522 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:00.522 lat (usec) : 250=40.62%, 500=50.55%, 750=8.58%, 1000=0.07% 00:19:00.522 lat (msec) : 2=0.07%, 4=0.04%, 50=0.07% 00:19:00.522 cpu : usr=4.26%, sys=5.14%, ctx=2831, majf=0, minf=1 00:19:00.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 issued rwts: total=1295,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.522 job1: (groupid=0, jobs=1): err= 0: pid=1779454: Wed May 15 16:40:07 2024 00:19:00.522 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:19:00.522 slat (nsec): min=8238, max=38221, avg=21552.36, stdev=9113.71 00:19:00.522 clat (usec): min=40798, max=41076, avg=40964.90, stdev=59.75 00:19:00.522 lat (usec): min=40807, max=41103, avg=40986.46, stdev=61.16 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:00.522 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:00.522 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:00.522 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:00.522 | 99.99th=[41157] 00:19:00.522 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:19:00.522 slat (nsec): min=6243, max=60377, avg=10905.88, stdev=6099.20 00:19:00.522 clat (usec): min=171, max=375, avg=212.48, stdev=29.14 00:19:00.522 lat (usec): min=178, max=401, avg=223.38, stdev=30.14 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:19:00.522 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:19:00.522 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 265], 00:19:00.522 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 375], 00:19:00.522 | 99.99th=[ 375] 00:19:00.522 bw ( KiB/s): min= 4096, max= 4096, per=22.71%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.522 lat (usec) : 250=89.33%, 
500=6.55% 00:19:00.522 lat (msec) : 50=4.12% 00:19:00.522 cpu : usr=0.29%, sys=0.49%, ctx=536, majf=0, minf=1 00:19:00.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.522 job2: (groupid=0, jobs=1): err= 0: pid=1779476: Wed May 15 16:40:07 2024 00:19:00.522 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:00.522 slat (nsec): min=5681, max=64628, avg=13539.36, stdev=6115.65 00:19:00.522 clat (usec): min=265, max=506, avg=336.94, stdev=32.01 00:19:00.522 lat (usec): min=271, max=515, avg=350.48, stdev=34.41 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 314], 00:19:00.522 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:19:00.522 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 396], 00:19:00.522 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 494], 99.95th=[ 506], 00:19:00.522 | 99.99th=[ 506] 00:19:00.522 write: IOPS=1583, BW=6334KiB/s (6486kB/s)(6340KiB/1001msec); 0 zone resets 00:19:00.522 slat (nsec): min=6536, max=68165, avg=18451.12, stdev=8875.26 00:19:00.522 clat (usec): min=188, max=469, avg=264.20, stdev=51.32 00:19:00.522 lat (usec): min=199, max=524, avg=282.66, stdev=54.93 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:19:00.522 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 249], 60.00th=[ 273], 00:19:00.522 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 363], 00:19:00.522 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 461], 99.95th=[ 469], 00:19:00.522 | 99.99th=[ 469] 00:19:00.522 bw ( KiB/s): min= 7936, max= 7936, per=44.01%, avg=7936.00, stdev= 0.00, samples=1 00:19:00.522 iops : min= 1984, max= 1984, avg=1984.00, stdev= 0.00, samples=1 00:19:00.522 lat (usec) : 250=25.44%, 500=74.53%, 750=0.03% 00:19:00.522 cpu : usr=4.20%, sys=5.20%, ctx=3122, majf=0, minf=1 00:19:00.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 issued rwts: total=1536,1585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.522 job3: (groupid=0, jobs=1): err= 0: pid=1779477: Wed May 15 16:40:07 2024 00:19:00.522 read: IOPS=583, BW=2332KiB/s (2388kB/s)(2388KiB/1024msec) 00:19:00.522 slat (nsec): min=5424, max=62977, avg=18543.60, stdev=9923.75 00:19:00.522 clat (usec): min=263, max=41049, avg=1214.07, stdev=5691.10 00:19:00.522 lat (usec): min=274, max=41067, avg=1232.62, stdev=5691.55 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 334], 00:19:00.522 | 30.00th=[ 347], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 396], 00:19:00.522 | 70.00th=[ 424], 80.00th=[ 469], 90.00th=[ 515], 95.00th=[ 553], 00:19:00.522 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:00.522 | 99.99th=[41157] 00:19:00.522 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:19:00.522 slat (nsec): min=6725, max=65413, 
avg=15553.48, stdev=7535.22 00:19:00.522 clat (usec): min=182, max=540, avg=257.81, stdev=49.47 00:19:00.522 lat (usec): min=192, max=550, avg=273.36, stdev=50.06 00:19:00.522 clat percentiles (usec): 00:19:00.522 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 225], 00:19:00.522 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 251], 00:19:00.522 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 322], 95.00th=[ 383], 00:19:00.522 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 482], 99.95th=[ 537], 00:19:00.522 | 99.99th=[ 537] 00:19:00.522 bw ( KiB/s): min= 4096, max= 4096, per=22.71%, avg=4096.00, stdev= 0.00, samples=2 00:19:00.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:00.522 lat (usec) : 250=37.45%, 500=57.68%, 750=3.70%, 1000=0.06% 00:19:00.522 lat (msec) : 2=0.37%, 50=0.74% 00:19:00.522 cpu : usr=2.25%, sys=2.54%, ctx=1623, majf=0, minf=2 00:19:00.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.523 issued rwts: total=597,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.523 00:19:00.523 Run status group 0 (all jobs): 00:19:00.523 READ: bw=13.0MiB/s (13.7MB/s), 86.4KiB/s-6138KiB/s (88.5kB/s-6285kB/s), io=13.5MiB (14.1MB), run=1001-1033msec 00:19:00.523 WRITE: bw=17.6MiB/s (18.5MB/s), 2012KiB/s-6334KiB/s (2060kB/s-6486kB/s), io=18.2MiB (19.1MB), run=1001-1033msec 00:19:00.523 00:19:00.523 Disk stats (read/write): 00:19:00.523 nvme0n1: ios=1069/1495, merge=0/0, ticks=464/339, in_queue=803, util=87.27% 00:19:00.523 nvme0n2: ios=43/512, merge=0/0, ticks=1683/107, in_queue=1790, util=97.76% 00:19:00.523 nvme0n3: ios=1140/1536, merge=0/0, ticks=1361/381, in_queue=1742, util=98.33% 00:19:00.523 nvme0n4: ios=536/961, merge=0/0, ticks=1516/236, in_queue=1752, util=98.52% 00:19:00.523 16:40:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:00.523 [global] 00:19:00.523 thread=1 00:19:00.523 invalidate=1 00:19:00.523 rw=write 00:19:00.523 time_based=1 00:19:00.523 runtime=1 00:19:00.523 ioengine=libaio 00:19:00.523 direct=1 00:19:00.523 bs=4096 00:19:00.523 iodepth=128 00:19:00.523 norandommap=0 00:19:00.523 numjobs=1 00:19:00.523 00:19:00.523 verify_dump=1 00:19:00.523 verify_backlog=512 00:19:00.523 verify_state_save=0 00:19:00.523 do_verify=1 00:19:00.523 verify=crc32c-intel 00:19:00.523 [job0] 00:19:00.523 filename=/dev/nvme0n1 00:19:00.523 [job1] 00:19:00.523 filename=/dev/nvme0n2 00:19:00.523 [job2] 00:19:00.523 filename=/dev/nvme0n3 00:19:00.523 [job3] 00:19:00.523 filename=/dev/nvme0n4 00:19:00.523 Could not set queue depth (nvme0n1) 00:19:00.523 Could not set queue depth (nvme0n2) 00:19:00.523 Could not set queue depth (nvme0n3) 00:19:00.523 Could not set queue depth (nvme0n4) 00:19:00.782 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.782 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.782 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.782 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
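# fio-wrapper writes the ini job file echoed above and runs fio against the
# four namespaces of cnode1; across the passes only rw (write/randwrite/read)
# and iodepth (1 vs 128) change. Since fio accepts job options as
# --option=value on the command line, one job of this iodepth=128 write pass
# corresponds roughly to the standalone command below (a sketch, not the
# wrapper's literal invocation):
#
#   fio --name=job0 --filename=/dev/nvme0n1 --rw=write --bs=4096 \
#       --iodepth=128 --ioengine=libaio --direct=1 --thread --time_based \
#       --runtime=1 --invalidate=1 --do_verify=1 --verify=crc32c-intel \
#       --verify_dump=1 --verify_backlog=512 --verify_state_save=0
#
# The "Could not set queue depth" lines are fio warnings that it could not
# adjust the block devices' queue settings; the runs complete regardless, as
# the per-job output below shows.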
00:19:00.782 fio-3.35 00:19:00.782 Starting 4 threads 00:19:02.159 00:19:02.159 job0: (groupid=0, jobs=1): err= 0: pid=1779706: Wed May 15 16:40:08 2024 00:19:02.159 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:19:02.159 slat (usec): min=2, max=22876, avg=124.59, stdev=967.00 00:19:02.159 clat (usec): min=1949, max=45690, avg=16029.24, stdev=5962.05 00:19:02.159 lat (usec): min=1955, max=45720, avg=16153.83, stdev=6022.73 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 5669], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11338], 00:19:02.159 | 30.00th=[11994], 40.00th=[13042], 50.00th=[14353], 60.00th=[16581], 00:19:02.159 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23987], 95.00th=[26346], 00:19:02.159 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[39060], 00:19:02.159 | 99.99th=[45876] 00:19:02.159 write: IOPS=4319, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1005msec); 0 zone resets 00:19:02.159 slat (usec): min=3, max=10558, avg=95.56, stdev=672.99 00:19:02.159 clat (usec): min=365, max=61158, avg=14195.81, stdev=7559.28 00:19:02.159 lat (usec): min=389, max=61169, avg=14291.37, stdev=7604.17 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 1205], 5.00th=[ 5211], 10.00th=[ 8094], 20.00th=[10683], 00:19:02.159 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:19:02.159 | 70.00th=[14091], 80.00th=[16581], 90.00th=[21627], 95.00th=[23200], 00:19:02.159 | 99.00th=[51119], 99.50th=[58459], 99.90th=[61080], 99.95th=[61080], 00:19:02.159 | 99.99th=[61080] 00:19:02.159 bw ( KiB/s): min=16384, max=17320, per=23.34%, avg=16852.00, stdev=661.85, samples=2 00:19:02.159 iops : min= 4096, max= 4330, avg=4213.00, stdev=165.46, samples=2 00:19:02.159 lat (usec) : 500=0.04%, 750=0.08%, 1000=0.12% 00:19:02.159 lat (msec) : 2=0.60%, 4=0.71%, 10=12.52%, 20=66.49%, 50=18.76% 00:19:02.159 lat (msec) : 100=0.68% 00:19:02.159 cpu : usr=2.99%, sys=6.47%, ctx=357, majf=0, minf=1 00:19:02.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:02.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.159 issued rwts: total=4096,4341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.159 job1: (groupid=0, jobs=1): err= 0: pid=1779708: Wed May 15 16:40:08 2024 00:19:02.159 read: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.3MiB/1009msec) 00:19:02.159 slat (usec): min=2, max=13863, avg=91.30, stdev=642.20 00:19:02.159 clat (usec): min=4265, max=34785, avg=12092.01, stdev=3238.68 00:19:02.159 lat (usec): min=4273, max=34826, avg=12183.31, stdev=3286.08 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 7308], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[ 9896], 00:19:02.159 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:19:02.159 | 70.00th=[12911], 80.00th=[14091], 90.00th=[15926], 95.00th=[19006], 00:19:02.159 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:19:02.159 | 99.99th=[34866] 00:19:02.159 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:19:02.159 slat (usec): min=4, max=11389, avg=81.70, stdev=579.67 00:19:02.159 clat (usec): min=495, max=46985, avg=11553.93, stdev=5327.45 00:19:02.159 lat (usec): min=519, max=47023, avg=11635.63, stdev=5368.47 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 3326], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 
8356], 00:19:02.159 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:19:02.159 | 70.00th=[11994], 80.00th=[13042], 90.00th=[14484], 95.00th=[18744], 00:19:02.159 | 99.00th=[40109], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:19:02.159 | 99.99th=[46924] 00:19:02.159 bw ( KiB/s): min=20480, max=24248, per=30.97%, avg=22364.00, stdev=2664.38, samples=2 00:19:02.159 iops : min= 5120, max= 6062, avg=5591.00, stdev=666.09, samples=2 00:19:02.159 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.06% 00:19:02.159 lat (msec) : 4=0.63%, 10=25.85%, 20=69.34%, 50=4.09% 00:19:02.159 cpu : usr=7.84%, sys=10.81%, ctx=435, majf=0, minf=1 00:19:02.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:02.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.159 issued rwts: total=5206,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.159 job2: (groupid=0, jobs=1): err= 0: pid=1779709: Wed May 15 16:40:08 2024 00:19:02.159 read: IOPS=4145, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec) 00:19:02.159 slat (usec): min=3, max=9187, avg=112.86, stdev=617.81 00:19:02.159 clat (usec): min=781, max=29468, avg=14528.59, stdev=3302.17 00:19:02.159 lat (usec): min=7052, max=29503, avg=14641.45, stdev=3334.32 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11600], 20.00th=[12256], 00:19:02.159 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13566], 60.00th=[14091], 00:19:02.159 | 70.00th=[15008], 80.00th=[16581], 90.00th=[19006], 95.00th=[21103], 00:19:02.159 | 99.00th=[26870], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:19:02.159 | 99.99th=[29492] 00:19:02.159 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:19:02.159 slat (usec): min=4, max=11862, avg=102.26, stdev=668.52 00:19:02.159 clat (usec): min=4797, max=31991, avg=14498.80, stdev=3753.63 00:19:02.159 lat (usec): min=4857, max=32012, avg=14601.06, stdev=3809.65 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 7111], 5.00th=[10290], 10.00th=[11863], 20.00th=[12256], 00:19:02.159 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:19:02.159 | 70.00th=[14091], 80.00th=[16450], 90.00th=[21103], 95.00th=[21890], 00:19:02.159 | 99.00th=[27132], 99.50th=[28181], 99.90th=[30802], 99.95th=[31065], 00:19:02.159 | 99.99th=[32113] 00:19:02.159 bw ( KiB/s): min=16384, max=19960, per=25.16%, avg=18172.00, stdev=2528.61, samples=2 00:19:02.159 iops : min= 4096, max= 4990, avg=4543.00, stdev=632.15, samples=2 00:19:02.159 lat (usec) : 1000=0.01% 00:19:02.159 lat (msec) : 10=3.01%, 20=86.87%, 50=10.11% 00:19:02.159 cpu : usr=6.49%, sys=8.78%, ctx=490, majf=0, minf=1 00:19:02.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:02.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.159 issued rwts: total=4158,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.159 job3: (groupid=0, jobs=1): err= 0: pid=1779710: Wed May 15 16:40:08 2024 00:19:02.159 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:19:02.159 slat (usec): min=2, max=16715, avg=138.96, stdev=845.24 00:19:02.159 clat (usec): min=7084, 
max=50014, avg=17276.20, stdev=6309.92 00:19:02.159 lat (usec): min=7091, max=50022, avg=17415.15, stdev=6388.25 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 7177], 5.00th=[12125], 10.00th=[12780], 20.00th=[13698], 00:19:02.159 | 30.00th=[14746], 40.00th=[15401], 50.00th=[15795], 60.00th=[16909], 00:19:02.159 | 70.00th=[17695], 80.00th=[18744], 90.00th=[20841], 95.00th=[29754], 00:19:02.160 | 99.00th=[43779], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:19:02.160 | 99.99th=[50070] 00:19:02.160 write: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1004msec); 0 zone resets 00:19:02.160 slat (usec): min=3, max=11512, avg=121.69, stdev=687.78 00:19:02.160 clat (usec): min=700, max=43615, avg=17915.24, stdev=8109.04 00:19:02.160 lat (usec): min=734, max=43767, avg=18036.93, stdev=8152.44 00:19:02.160 clat percentiles (usec): 00:19:02.160 | 1.00th=[ 3032], 5.00th=[ 5342], 10.00th=[11863], 20.00th=[13435], 00:19:02.160 | 30.00th=[14222], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:19:02.160 | 70.00th=[17695], 80.00th=[23200], 90.00th=[31851], 95.00th=[36963], 00:19:02.160 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:19:02.160 | 99.99th=[43779] 00:19:02.160 bw ( KiB/s): min=12288, max=16384, per=19.85%, avg=14336.00, stdev=2896.31, samples=2 00:19:02.160 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:02.160 lat (usec) : 750=0.07%, 1000=0.06% 00:19:02.160 lat (msec) : 2=0.19%, 4=0.94%, 10=4.35%, 20=75.68%, 50=18.65% 00:19:02.160 lat (msec) : 100=0.07% 00:19:02.160 cpu : usr=3.59%, sys=8.67%, ctx=357, majf=0, minf=1 00:19:02.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:02.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.160 issued rwts: total=3584,3635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.160 00:19:02.160 Run status group 0 (all jobs): 00:19:02.160 READ: bw=66.0MiB/s (69.2MB/s), 13.9MiB/s-20.2MiB/s (14.6MB/s-21.1MB/s), io=66.6MiB (69.8MB), run=1003-1009msec 00:19:02.160 WRITE: bw=70.5MiB/s (73.9MB/s), 14.1MiB/s-21.8MiB/s (14.8MB/s-22.9MB/s), io=71.2MiB (74.6MB), run=1003-1009msec 00:19:02.160 00:19:02.160 Disk stats (read/write): 00:19:02.160 nvme0n1: ios=3116/3584, merge=0/0, ticks=31796/30232, in_queue=62028, util=86.87% 00:19:02.160 nvme0n2: ios=4360/4608, merge=0/0, ticks=50087/51296, in_queue=101383, util=86.14% 00:19:02.160 nvme0n3: ios=3584/3670, merge=0/0, ticks=24831/26646, in_queue=51477, util=88.75% 00:19:02.160 nvme0n4: ios=2874/3072, merge=0/0, ticks=25799/29473, in_queue=55272, util=89.60% 00:19:02.160 16:40:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:02.160 [global] 00:19:02.160 thread=1 00:19:02.160 invalidate=1 00:19:02.160 rw=randwrite 00:19:02.160 time_based=1 00:19:02.160 runtime=1 00:19:02.160 ioengine=libaio 00:19:02.160 direct=1 00:19:02.160 bs=4096 00:19:02.160 iodepth=128 00:19:02.160 norandommap=0 00:19:02.160 numjobs=1 00:19:02.160 00:19:02.160 verify_dump=1 00:19:02.160 verify_backlog=512 00:19:02.160 verify_state_save=0 00:19:02.160 do_verify=1 00:19:02.160 verify=crc32c-intel 00:19:02.160 [job0] 00:19:02.160 filename=/dev/nvme0n1 00:19:02.160 [job1] 00:19:02.160 filename=/dev/nvme0n2 00:19:02.160 [job2] 00:19:02.160 
filename=/dev/nvme0n3 00:19:02.160 [job3] 00:19:02.160 filename=/dev/nvme0n4 00:19:02.160 Could not set queue depth (nvme0n1) 00:19:02.160 Could not set queue depth (nvme0n2) 00:19:02.160 Could not set queue depth (nvme0n3) 00:19:02.160 Could not set queue depth (nvme0n4) 00:19:02.160 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.160 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.160 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.160 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.160 fio-3.35 00:19:02.160 Starting 4 threads 00:19:03.535 00:19:03.535 job0: (groupid=0, jobs=1): err= 0: pid=1779942: Wed May 15 16:40:10 2024 00:19:03.535 read: IOPS=3610, BW=14.1MiB/s (14.8MB/s)(14.7MiB/1044msec) 00:19:03.535 slat (usec): min=3, max=35205, avg=108.05, stdev=910.20 00:19:03.535 clat (msec): min=7, max=105, avg=16.04, stdev=15.05 00:19:03.535 lat (msec): min=7, max=129, avg=16.15, stdev=15.14 00:19:03.535 clat percentiles (msec): 00:19:03.535 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 12], 00:19:03.535 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 13], 00:19:03.535 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 17], 95.00th=[ 29], 00:19:03.535 | 99.00th=[ 105], 99.50th=[ 105], 99.90th=[ 106], 99.95th=[ 106], 00:19:03.535 | 99.99th=[ 106] 00:19:03.535 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:19:03.535 slat (usec): min=3, max=22190, avg=131.83, stdev=878.37 00:19:03.535 clat (usec): min=1883, max=77234, avg=17066.52, stdev=8855.44 00:19:03.535 lat (usec): min=2756, max=77270, avg=17198.35, stdev=8945.39 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 5800], 5.00th=[ 9241], 10.00th=[12387], 20.00th=[13829], 00:19:03.535 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:19:03.535 | 70.00th=[15533], 80.00th=[16188], 90.00th=[25560], 95.00th=[34341], 00:19:03.535 | 99.00th=[59507], 99.50th=[59507], 99.90th=[61604], 99.95th=[64750], 00:19:03.535 | 99.99th=[77071] 00:19:03.535 bw ( KiB/s): min=16384, max=16384, per=27.53%, avg=16384.00, stdev= 0.00, samples=2 00:19:03.535 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:03.535 lat (msec) : 2=0.01%, 4=0.23%, 10=5.54%, 20=84.21%, 50=6.50% 00:19:03.535 lat (msec) : 100=2.71%, 250=0.80% 00:19:03.535 cpu : usr=4.60%, sys=7.96%, ctx=370, majf=0, minf=1 00:19:03.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:03.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.535 issued rwts: total=3769,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.535 job1: (groupid=0, jobs=1): err= 0: pid=1779943: Wed May 15 16:40:10 2024 00:19:03.535 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:03.535 slat (usec): min=2, max=12009, avg=154.61, stdev=866.44 00:19:03.535 clat (usec): min=5585, max=56653, avg=21638.16, stdev=10797.72 00:19:03.535 lat (usec): min=5666, max=57563, avg=21792.77, stdev=10860.66 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 8291], 5.00th=[10552], 10.00th=[10814], 20.00th=[11600], 00:19:03.535 | 
30.00th=[12518], 40.00th=[15926], 50.00th=[19006], 60.00th=[23987], 00:19:03.535 | 70.00th=[26084], 80.00th=[30278], 90.00th=[34866], 95.00th=[43254], 00:19:03.535 | 99.00th=[53740], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:19:03.535 | 99.99th=[56886] 00:19:03.535 write: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1003msec); 0 zone resets 00:19:03.535 slat (usec): min=3, max=20291, avg=153.85, stdev=960.88 00:19:03.535 clat (usec): min=558, max=53596, avg=18579.25, stdev=9496.06 00:19:03.535 lat (usec): min=3273, max=53857, avg=18733.10, stdev=9582.37 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 4080], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10421], 00:19:03.535 | 30.00th=[10814], 40.00th=[15926], 50.00th=[16712], 60.00th=[18482], 00:19:03.535 | 70.00th=[20841], 80.00th=[24511], 90.00th=[33817], 95.00th=[37487], 00:19:03.535 | 99.00th=[46400], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:19:03.535 | 99.99th=[53740] 00:19:03.535 bw ( KiB/s): min=12288, max=12632, per=20.94%, avg=12460.00, stdev=243.24, samples=2 00:19:03.535 iops : min= 3072, max= 3158, avg=3115.00, stdev=60.81, samples=2 00:19:03.535 lat (usec) : 750=0.02% 00:19:03.535 lat (msec) : 4=0.44%, 10=7.92%, 20=51.70%, 50=38.43%, 100=1.49% 00:19:03.535 cpu : usr=2.30%, sys=6.29%, ctx=279, majf=0, minf=1 00:19:03.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:03.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.535 issued rwts: total=3072,3243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.535 job2: (groupid=0, jobs=1): err= 0: pid=1779944: Wed May 15 16:40:10 2024 00:19:03.535 read: IOPS=4775, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1005msec) 00:19:03.535 slat (usec): min=2, max=14868, avg=97.88, stdev=623.76 00:19:03.535 clat (usec): min=3850, max=30259, avg=12859.54, stdev=3185.45 00:19:03.535 lat (usec): min=3864, max=30303, avg=12957.42, stdev=3225.60 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11076], 00:19:03.535 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12387], 00:19:03.535 | 70.00th=[13173], 80.00th=[15270], 90.00th=[16909], 95.00th=[19006], 00:19:03.535 | 99.00th=[25297], 99.50th=[26608], 99.90th=[29492], 99.95th=[29492], 00:19:03.535 | 99.99th=[30278] 00:19:03.535 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:19:03.535 slat (usec): min=3, max=10765, avg=89.93, stdev=502.12 00:19:03.535 clat (usec): min=3482, max=32995, avg=12771.82, stdev=4649.48 00:19:03.535 lat (usec): min=3492, max=33014, avg=12861.75, stdev=4684.52 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 5538], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10290], 00:19:03.535 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:19:03.535 | 70.00th=[12780], 80.00th=[14091], 90.00th=[16581], 95.00th=[22676], 00:19:03.535 | 99.00th=[32113], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:19:03.535 | 99.99th=[32900] 00:19:03.535 bw ( KiB/s): min=19024, max=21936, per=34.42%, avg=20480.00, stdev=2059.09, samples=2 00:19:03.535 iops : min= 4756, max= 5484, avg=5120.00, stdev=514.77, samples=2 00:19:03.535 lat (msec) : 4=0.15%, 10=13.10%, 20=82.04%, 50=4.71% 00:19:03.535 cpu : usr=6.67%, sys=9.86%, ctx=447, majf=0, minf=1 00:19:03.535 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:03.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.535 issued rwts: total=4799,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.535 job3: (groupid=0, jobs=1): err= 0: pid=1779945: Wed May 15 16:40:10 2024 00:19:03.535 read: IOPS=2846, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1003msec) 00:19:03.535 slat (usec): min=2, max=17176, avg=188.70, stdev=1147.70 00:19:03.535 clat (usec): min=527, max=56770, avg=24124.07, stdev=12361.69 00:19:03.535 lat (usec): min=5024, max=56783, avg=24312.77, stdev=12407.51 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 5735], 5.00th=[11469], 10.00th=[12256], 20.00th=[13829], 00:19:03.535 | 30.00th=[14353], 40.00th=[17695], 50.00th=[19268], 60.00th=[23987], 00:19:03.535 | 70.00th=[30278], 80.00th=[34341], 90.00th=[44303], 95.00th=[50594], 00:19:03.535 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:19:03.535 | 99.99th=[56886] 00:19:03.535 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:19:03.535 slat (usec): min=3, max=15727, avg=138.51, stdev=862.92 00:19:03.535 clat (usec): min=4867, max=43689, avg=18999.60, stdev=6734.74 00:19:03.535 lat (usec): min=4876, max=47736, avg=19138.11, stdev=6762.66 00:19:03.535 clat percentiles (usec): 00:19:03.535 | 1.00th=[ 9896], 5.00th=[10421], 10.00th=[11076], 20.00th=[12911], 00:19:03.535 | 30.00th=[13829], 40.00th=[16450], 50.00th=[17957], 60.00th=[19006], 00:19:03.535 | 70.00th=[22676], 80.00th=[25822], 90.00th=[27657], 95.00th=[30802], 00:19:03.535 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:19:03.535 | 99.99th=[43779] 00:19:03.535 bw ( KiB/s): min= 8968, max=15608, per=20.65%, avg=12288.00, stdev=4695.19, samples=2 00:19:03.535 iops : min= 2242, max= 3902, avg=3072.00, stdev=1173.80, samples=2 00:19:03.535 lat (usec) : 750=0.02% 00:19:03.535 lat (msec) : 10=3.24%, 20=53.38%, 50=40.75%, 100=2.62% 00:19:03.535 cpu : usr=4.29%, sys=6.59%, ctx=226, majf=0, minf=1 00:19:03.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:03.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.535 issued rwts: total=2855,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.535 00:19:03.535 Run status group 0 (all jobs): 00:19:03.535 READ: bw=54.2MiB/s (56.9MB/s), 11.1MiB/s-18.7MiB/s (11.7MB/s-19.6MB/s), io=56.6MiB (59.4MB), run=1003-1044msec 00:19:03.535 WRITE: bw=58.1MiB/s (60.9MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=60.7MiB (63.6MB), run=1003-1044msec 00:19:03.535 00:19:03.535 Disk stats (read/write): 00:19:03.535 nvme0n1: ios=3631/3678, merge=0/0, ticks=17609/20077, in_queue=37686, util=99.70% 00:19:03.535 nvme0n2: ios=2287/2560, merge=0/0, ticks=18051/16151, in_queue=34202, util=97.87% 00:19:03.536 nvme0n3: ios=4150/4487, merge=0/0, ticks=26403/30088, in_queue=56491, util=99.90% 00:19:03.536 nvme0n4: ios=2266/2560, merge=0/0, ticks=17086/17559, in_queue=34645, util=99.79% 00:19:03.536 16:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:03.536 16:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1780077 00:19:03.536 16:40:10 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:03.536 16:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:03.536 [global] 00:19:03.536 thread=1 00:19:03.536 invalidate=1 00:19:03.536 rw=read 00:19:03.536 time_based=1 00:19:03.536 runtime=10 00:19:03.536 ioengine=libaio 00:19:03.536 direct=1 00:19:03.536 bs=4096 00:19:03.536 iodepth=1 00:19:03.536 norandommap=1 00:19:03.536 numjobs=1 00:19:03.536 00:19:03.536 [job0] 00:19:03.536 filename=/dev/nvme0n1 00:19:03.536 [job1] 00:19:03.536 filename=/dev/nvme0n2 00:19:03.536 [job2] 00:19:03.536 filename=/dev/nvme0n3 00:19:03.536 [job3] 00:19:03.536 filename=/dev/nvme0n4 00:19:03.536 Could not set queue depth (nvme0n1) 00:19:03.536 Could not set queue depth (nvme0n2) 00:19:03.536 Could not set queue depth (nvme0n3) 00:19:03.536 Could not set queue depth (nvme0n4) 00:19:03.536 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.536 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.536 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.536 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.536 fio-3.35 00:19:03.536 Starting 4 threads 00:19:06.813 16:40:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:06.813 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=32632832, buflen=4096 00:19:06.813 fio: pid=1780172, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:06.813 16:40:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:06.813 16:40:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:06.813 16:40:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:06.813 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=503808, buflen=4096 00:19:06.813 fio: pid=1780171, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.071 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=364544, buflen=4096 00:19:07.071 fio: pid=1780169, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.071 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.071 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:07.329 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10694656, buflen=4096 00:19:07.329 fio: pid=1780170, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.329 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.329 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:07.329 00:19:07.329 job0: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1780169: Wed May 15 16:40:14 2024 00:19:07.329 read: IOPS=26, BW=105KiB/s (108kB/s)(356KiB/3385msec) 00:19:07.329 slat (usec): min=9, max=14942, avg=188.34, stdev=1572.70 00:19:07.329 clat (usec): min=444, max=41798, avg=37827.19, stdev=10969.96 00:19:07.329 lat (usec): min=469, max=56027, avg=38017.50, stdev=11131.09 00:19:07.329 clat percentiles (usec): 00:19:07.329 | 1.00th=[ 445], 5.00th=[ 506], 10.00th=[40633], 20.00th=[41157], 00:19:07.329 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:07.329 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:07.329 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:07.329 | 99.99th=[41681] 00:19:07.329 bw ( KiB/s): min= 96, max= 120, per=0.89%, avg=106.67, stdev= 8.26, samples=6 00:19:07.329 iops : min= 24, max= 30, avg=26.67, stdev= 2.07, samples=6 00:19:07.329 lat (usec) : 500=3.33%, 750=4.44% 00:19:07.329 lat (msec) : 50=91.11% 00:19:07.329 cpu : usr=0.12%, sys=0.00%, ctx=93, majf=0, minf=1 00:19:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.329 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1780170: Wed May 15 16:40:14 2024 00:19:07.329 read: IOPS=719, BW=2876KiB/s (2945kB/s)(10.2MiB/3631msec) 00:19:07.329 slat (usec): min=4, max=12378, avg=27.77, stdev=340.80 00:19:07.329 clat (usec): min=271, max=41458, avg=1358.28, stdev=6240.64 00:19:07.329 lat (usec): min=278, max=52755, avg=1381.32, stdev=6275.69 00:19:07.329 clat percentiles (usec): 00:19:07.329 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 322], 00:19:07.329 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 375], 60.00th=[ 379], 00:19:07.329 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 498], 95.00th=[ 519], 00:19:07.329 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:19:07.329 | 99.99th=[41681] 00:19:07.329 bw ( KiB/s): min= 104, max=10080, per=25.04%, avg=2976.86, stdev=3568.84, samples=7 00:19:07.329 iops : min= 26, max= 2520, avg=744.14, stdev=892.28, samples=7 00:19:07.329 lat (usec) : 500=91.50%, 750=6.01% 00:19:07.329 lat (msec) : 4=0.04%, 50=2.41% 00:19:07.329 cpu : usr=0.58%, sys=1.43%, ctx=2619, majf=0, minf=1 00:19:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.329 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1780171: Wed May 15 16:40:14 2024 00:19:07.329 read: IOPS=39, BW=156KiB/s (160kB/s)(492KiB/3152msec) 00:19:07.329 slat (nsec): min=7866, max=51934, avg=23576.50, stdev=10337.87 00:19:07.329 clat (usec): min=412, max=42942, avg=25556.61, stdev=19738.98 00:19:07.329 lat (usec): min=425, max=42964, avg=25580.26, stdev=19738.78 00:19:07.329 clat percentiles (usec): 
00:19:07.329 | 1.00th=[ 461], 5.00th=[ 494], 10.00th=[ 515], 20.00th=[ 537], 00:19:07.329 | 30.00th=[ 619], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:19:07.329 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:07.329 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:07.329 | 99.99th=[42730] 00:19:07.329 bw ( KiB/s): min= 112, max= 216, per=1.33%, avg=158.67, stdev=45.37, samples=6 00:19:07.329 iops : min= 28, max= 54, avg=39.67, stdev=11.34, samples=6 00:19:07.329 lat (usec) : 500=7.26%, 750=29.84%, 1000=0.81% 00:19:07.329 lat (msec) : 50=61.29% 00:19:07.329 cpu : usr=0.13%, sys=0.06%, ctx=127, majf=0, minf=1 00:19:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.329 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1780172: Wed May 15 16:40:14 2024 00:19:07.329 read: IOPS=2782, BW=10.9MiB/s (11.4MB/s)(31.1MiB/2864msec) 00:19:07.329 slat (nsec): min=5236, max=67623, avg=11496.86, stdev=5682.75 00:19:07.329 clat (usec): min=273, max=948, avg=344.94, stdev=34.73 00:19:07.329 lat (usec): min=279, max=964, avg=356.43, stdev=37.71 00:19:07.329 clat percentiles (usec): 00:19:07.329 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:19:07.329 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:19:07.329 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 400], 00:19:07.329 | 99.00th=[ 465], 99.50th=[ 490], 99.90th=[ 523], 99.95th=[ 676], 00:19:07.329 | 99.99th=[ 947] 00:19:07.329 bw ( KiB/s): min=10392, max=11688, per=92.82%, avg=11033.60, stdev=478.11, samples=5 00:19:07.329 iops : min= 2598, max= 2922, avg=2758.40, stdev=119.53, samples=5 00:19:07.329 lat (usec) : 500=99.72%, 750=0.25%, 1000=0.01% 00:19:07.329 cpu : usr=2.58%, sys=4.61%, ctx=7968, majf=0, minf=1 00:19:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.329 issued rwts: total=7968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.329 00:19:07.329 Run status group 0 (all jobs): 00:19:07.329 READ: bw=11.6MiB/s (12.2MB/s), 105KiB/s-10.9MiB/s (108kB/s-11.4MB/s), io=42.1MiB (44.2MB), run=2864-3631msec 00:19:07.329 00:19:07.329 Disk stats (read/write): 00:19:07.329 nvme0n1: ios=88/0, merge=0/0, ticks=3328/0, in_queue=3328, util=95.54% 00:19:07.329 nvme0n2: ios=2609/0, merge=0/0, ticks=3466/0, in_queue=3466, util=96.17% 00:19:07.329 nvme0n3: ios=171/0, merge=0/0, ticks=4137/0, in_queue=4137, util=99.56% 00:19:07.329 nvme0n4: ios=7934/0, merge=0/0, ticks=2670/0, in_queue=2670, util=96.75% 00:19:07.587 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.587 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:07.843 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.843 16:40:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:08.101 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.102 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:08.359 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.359 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:08.617 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:08.617 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1780077 00:19:08.617 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:08.617 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:08.875 nvmf hotplug test: fio failed as expected 00:19:08.875 16:40:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.132 
rmmod nvme_tcp 00:19:09.132 rmmod nvme_fabrics 00:19:09.132 rmmod nvme_keyring 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1778172 ']' 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1778172 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1778172 ']' 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1778172 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1778172 00:19:09.132 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:09.133 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:09.133 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1778172' 00:19:09.133 killing process with pid 1778172 00:19:09.133 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1778172 00:19:09.133 [2024-05-15 16:40:16.227342] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:09.133 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1778172 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.390 16:40:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.289 16:40:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:11.289 00:19:11.289 real 0m23.872s 00:19:11.289 user 1m19.707s 00:19:11.289 sys 0m7.750s 00:19:11.289 16:40:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:11.289 16:40:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.289 ************************************ 00:19:11.289 END TEST nvmf_fio_target 00:19:11.289 ************************************ 00:19:11.547 16:40:18 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:11.547 16:40:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:11.547 16:40:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:19:11.547 16:40:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:11.547 ************************************ 00:19:11.547 START TEST nvmf_bdevio 00:19:11.547 ************************************ 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:11.547 * Looking for test storage... 00:19:11.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:11.547 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.548 16:40:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:14.075 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:14.075 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:14.075 Found net devices under 0000:09:00.0: cvl_0_0 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:14.075 
Found net devices under 0000:09:00.1: cvl_0_1 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:19:14.075 00:19:14.075 --- 10.0.0.2 ping statistics --- 00:19:14.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.075 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:19:14.075 00:19:14.075 --- 10.0.0.1 ping statistics --- 00:19:14.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.075 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.075 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1783196 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1783196 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1783196 ']' 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.076 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.333 [2024-05-15 16:40:21.331163] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:19:14.333 [2024-05-15 16:40:21.331274] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.333 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.333 [2024-05-15 16:40:21.408069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.333 [2024-05-15 16:40:21.493465] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.333 [2024-05-15 16:40:21.493537] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:14.333 [2024-05-15 16:40:21.493551] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.334 [2024-05-15 16:40:21.493562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.334 [2024-05-15 16:40:21.493572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.334 [2024-05-15 16:40:21.493691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:14.334 [2024-05-15 16:40:21.493755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:14.334 [2024-05-15 16:40:21.493825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:14.334 [2024-05-15 16:40:21.493827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 [2024-05-15 16:40:21.655017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 Malloc0 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:14.591 [2024-05-15 16:40:21.708596] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:14.591 [2024-05-15 16:40:21.708931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:14.591 { 00:19:14.591 "params": { 00:19:14.591 "name": "Nvme$subsystem", 00:19:14.591 "trtype": "$TEST_TRANSPORT", 00:19:14.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.591 "adrfam": "ipv4", 00:19:14.591 "trsvcid": "$NVMF_PORT", 00:19:14.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.591 "hdgst": ${hdgst:-false}, 00:19:14.591 "ddgst": ${ddgst:-false} 00:19:14.591 }, 00:19:14.591 "method": "bdev_nvme_attach_controller" 00:19:14.591 } 00:19:14.591 EOF 00:19:14.591 )") 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:14.591 16:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:14.591 "params": { 00:19:14.591 "name": "Nvme1", 00:19:14.591 "trtype": "tcp", 00:19:14.591 "traddr": "10.0.0.2", 00:19:14.591 "adrfam": "ipv4", 00:19:14.591 "trsvcid": "4420", 00:19:14.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.591 "hdgst": false, 00:19:14.591 "ddgst": false 00:19:14.591 }, 00:19:14.591 "method": "bdev_nvme_attach_controller" 00:19:14.591 }' 00:19:14.591 [2024-05-15 16:40:21.752444] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:19:14.591 [2024-05-15 16:40:21.752547] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783224 ] 00:19:14.591 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.848 [2024-05-15 16:40:21.822780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:14.848 [2024-05-15 16:40:21.911170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.848 [2024-05-15 16:40:21.911227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.848 [2024-05-15 16:40:21.911231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.106 I/O targets: 00:19:15.106 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:15.106 00:19:15.106 00:19:15.106 CUnit - A unit testing framework for C - Version 2.1-3 00:19:15.106 http://cunit.sourceforge.net/ 00:19:15.106 00:19:15.106 00:19:15.106 Suite: bdevio tests on: Nvme1n1 00:19:15.106 Test: blockdev write read block ...passed 00:19:15.106 Test: blockdev write zeroes read block ...passed 00:19:15.106 Test: blockdev write zeroes read no split ...passed 00:19:15.106 Test: blockdev write zeroes read split ...passed 00:19:15.107 Test: blockdev write zeroes read split partial ...passed 00:19:15.107 Test: blockdev reset ...[2024-05-15 16:40:22.332390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.107 [2024-05-15 16:40:22.332510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213f8c0 (9): Bad file descriptor 00:19:15.364 [2024-05-15 16:40:22.390963] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:15.364 passed 00:19:15.364 Test: blockdev write read 8 blocks ...passed 00:19:15.364 Test: blockdev write read size > 128k ...passed 00:19:15.364 Test: blockdev write read invalid size ...passed 00:19:15.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:15.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:15.364 Test: blockdev write read max offset ...passed 00:19:15.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:15.364 Test: blockdev writev readv 8 blocks ...passed 00:19:15.364 Test: blockdev writev readv 30 x 1block ...passed 00:19:15.622 Test: blockdev writev readv block ...passed 00:19:15.622 Test: blockdev writev readv size > 128k ...passed 00:19:15.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:15.622 Test: blockdev comparev and writev ...[2024-05-15 16:40:22.604224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.604261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.604298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.604326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.604691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.604719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.604753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.604781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.605136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.605164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.605198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.605235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.605624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.605650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.605685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.622 [2024-05-15 16:40:22.605713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:15.622 passed 00:19:15.622 Test: blockdev nvme passthru rw ...passed 00:19:15.622 Test: blockdev nvme passthru vendor specific ...[2024-05-15 16:40:22.689534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.622 [2024-05-15 16:40:22.689564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.689775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.622 [2024-05-15 16:40:22.689808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.690016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.622 [2024-05-15 16:40:22.690042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:15.622 [2024-05-15 16:40:22.690245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.622 [2024-05-15 16:40:22.690271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:15.622 passed 00:19:15.622 Test: blockdev nvme admin passthru ...passed 00:19:15.622 Test: blockdev copy ...passed 00:19:15.622 00:19:15.622 Run Summary: Type Total Ran Passed Failed Inactive 00:19:15.622 suites 1 1 n/a 0 0 00:19:15.622 tests 23 23 23 0 0 00:19:15.622 asserts 152 152 152 0 n/a 00:19:15.622 00:19:15.622 Elapsed time = 1.064 seconds 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.879 16:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.879 rmmod nvme_tcp 00:19:15.879 rmmod nvme_fabrics 00:19:15.879 rmmod nvme_keyring 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1783196 ']' 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1783196 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
1783196 ']' 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1783196 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1783196 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1783196' 00:19:15.879 killing process with pid 1783196 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 1783196 00:19:15.879 [2024-05-15 16:40:23.040111] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:15.879 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1783196 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.137 16:40:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.705 16:40:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.705 00:19:18.705 real 0m6.793s 00:19:18.705 user 0m10.346s 00:19:18.705 sys 0m2.431s 00:19:18.705 16:40:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:18.705 16:40:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.705 ************************************ 00:19:18.705 END TEST nvmf_bdevio 00:19:18.705 ************************************ 00:19:18.705 16:40:25 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:18.705 16:40:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:18.705 16:40:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:18.705 16:40:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:18.705 ************************************ 00:19:18.705 START TEST nvmf_auth_target 00:19:18.705 ************************************ 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:18.705 * Looking for test storage... 
00:19:18.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.705 16:40:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.706 16:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.235 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.235 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.235 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:21.236 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:21.236 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:21.236 Found net devices under 
0000:09:00.0: cvl_0_0 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:21.236 Found net devices under 0000:09:00.1: cvl_0_1 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.236 16:40:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:19:21.236 00:19:21.236 --- 10.0.0.2 ping statistics --- 00:19:21.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.236 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:19:21.236 00:19:21.236 --- 10.0.0.1 ping statistics --- 00:19:21.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.236 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1785700 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1785700 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1785700 ']' 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
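The nvmf_tcp_init sequence traced above gives the test a real TCP path between initiator and target by moving one of the two detected ice ports into a private network namespace and addressing both ends of the link. Condensed into a standalone bash sketch (the interface names are the ones detected on this runner; the 10.0.0.0/24 addressing comes straight from the trace):

#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"             # target NIC now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1        # namespace -> root namespace

This is why the target is launched above as "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth": every subsequent target-side command in this log has to run inside that namespace, while the host-side spdk_tgt and nvme-cli calls run in the root namespace.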
00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.236 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=1785733 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=27e8740333529d9e9828cc9156efe295f84e9cb7493c50c6 00:19:21.237 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vzx 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 27e8740333529d9e9828cc9156efe295f84e9cb7493c50c6 0 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 27e8740333529d9e9828cc9156efe295f84e9cb7493c50c6 0 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=27e8740333529d9e9828cc9156efe295f84e9cb7493c50c6 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vzx 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vzx 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.vzx 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9c164da61e503dd60866ef230345712d 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kAF 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9c164da61e503dd60866ef230345712d 1 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9c164da61e503dd60866ef230345712d 1 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9c164da61e503dd60866ef230345712d 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kAF 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kAF 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.kAF 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=68d482ae393f13aab6d77ac3f9d032952d49a6d4c3f91a24 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NYz 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 68d482ae393f13aab6d77ac3f9d032952d49a6d4c3f91a24 2 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 68d482ae393f13aab6d77ac3f9d032952d49a6d4c3f91a24 2 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=68d482ae393f13aab6d77ac3f9d032952d49a6d4c3f91a24 00:19:21.495 
16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NYz 00:19:21.495 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NYz 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.NYz 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cec173e437317e9721d66eeb610c91c2671bcdea5fc56605c830653362233acd 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D67 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cec173e437317e9721d66eeb610c91c2671bcdea5fc56605c830653362233acd 3 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cec173e437317e9721d66eeb610c91c2671bcdea5fc56605c830653362233acd 3 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cec173e437317e9721d66eeb610c91c2671bcdea5fc56605c830653362233acd 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D67 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D67 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.D67 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 1785700 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1785700 ']' 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
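The four gen_dhchap_key calls traced above all follow the same recipe: draw len/2 random bytes, print them as a len-character hex string, and wrap that string in the NVMe DHHC-1 secret format used by the later nvme connect --dhchap-secret arguments in this log. The python step is not expanded in the trace; the sketch below assumes the standard DHHC-1 layout, base64 of the key bytes followed by their little-endian CRC32, with digest ids 0-3 for null/sha256/sha384/sha512:

#!/usr/bin/env bash
# Sketch of the gen_dhchap_key flow traced above; the embedded python
# reconstructs the DHHC-1 formatting step (an assumption, not shown in
# the trace). Usage: gen_dhchap_key sha256 32
gen_dhchap_key() {
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"    # secrets are left mode 0600, as in the trace
    echo "$file"
}

As a sanity check against the log itself: base64-decoding the secret DHHC-1:00:MjdlODc0...kpOIlg==: that appears below yields exactly the 48-character hex key 27e8740333529d9e9828cc9156efe295f84e9cb7493c50c6 generated above plus a 4-byte trailer, which is consistent with this layout.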
00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.496 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 1785733 /var/tmp/host.sock 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1785733 ']' 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:21.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.754 16:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vzx 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vzx 00:19:22.012 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vzx 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kAF 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kAF 00:19:22.269 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.kAF 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NYz 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.NYz 00:19:22.527 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.NYz 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.D67 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.D67 00:19:22.784 16:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.D67 00:19:23.042 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:23.042 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.043 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:23.043 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.043 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:23.300 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:23.559 00:19:23.559 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:23.559 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:23.559 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:23.817 { 00:19:23.817 "cntlid": 1, 00:19:23.817 "qid": 0, 00:19:23.817 "state": "enabled", 00:19:23.817 "listen_address": { 00:19:23.817 "trtype": "TCP", 00:19:23.817 "adrfam": "IPv4", 00:19:23.817 "traddr": "10.0.0.2", 00:19:23.817 "trsvcid": "4420" 00:19:23.817 }, 00:19:23.817 "peer_address": { 00:19:23.817 "trtype": "TCP", 00:19:23.817 "adrfam": "IPv4", 00:19:23.817 "traddr": "10.0.0.1", 00:19:23.817 "trsvcid": "50536" 00:19:23.817 }, 00:19:23.817 "auth": { 00:19:23.817 "state": "completed", 00:19:23.817 "digest": "sha256", 00:19:23.817 "dhgroup": "null" 00:19:23.817 } 00:19:23.817 } 00:19:23.817 ]' 00:19:23.817 16:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:23.817 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.817 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:24.075 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:24.075 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:24.075 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.075 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.075 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.333 16:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:25.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:25.266 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:25.830 00:19:25.830 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:25.830 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:25.830 16:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.830 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.830 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.830 16:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 16:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 16:40:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:25.830 { 00:19:25.830 "cntlid": 3, 00:19:25.830 "qid": 0, 00:19:25.830 "state": "enabled", 00:19:25.830 "listen_address": { 00:19:25.830 "trtype": "TCP", 00:19:25.830 "adrfam": "IPv4", 00:19:25.830 "traddr": "10.0.0.2", 00:19:25.830 "trsvcid": "4420" 00:19:25.830 }, 00:19:25.830 "peer_address": { 00:19:25.830 "trtype": "TCP", 00:19:25.830 "adrfam": "IPv4", 00:19:25.830 "traddr": "10.0.0.1", 00:19:25.830 "trsvcid": "41122" 00:19:25.830 }, 00:19:25.830 "auth": { 00:19:25.830 "state": "completed", 00:19:25.830 "digest": "sha256", 00:19:25.830 "dhgroup": "null" 00:19:25.830 } 00:19:25.830 } 00:19:25.830 ]' 00:19:25.830 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.089 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.346 16:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.279 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:27.537 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:27.794 00:19:27.794 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:27.794 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:27.794 16:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:28.052 { 00:19:28.052 "cntlid": 5, 00:19:28.052 "qid": 0, 00:19:28.052 "state": "enabled", 00:19:28.052 "listen_address": { 00:19:28.052 "trtype": "TCP", 00:19:28.052 "adrfam": "IPv4", 00:19:28.052 "traddr": "10.0.0.2", 00:19:28.052 "trsvcid": "4420" 00:19:28.052 }, 00:19:28.052 "peer_address": { 00:19:28.052 "trtype": "TCP", 00:19:28.052 "adrfam": "IPv4", 00:19:28.052 "traddr": "10.0.0.1", 00:19:28.052 "trsvcid": "41146" 00:19:28.052 }, 00:19:28.052 "auth": { 00:19:28.052 "state": "completed", 00:19:28.052 "digest": "sha256", 00:19:28.052 "dhgroup": "null" 00:19:28.052 } 00:19:28.052 } 00:19:28.052 ]' 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:28.052 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:28.309 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.309 16:40:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.309 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.566 16:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.501 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.758 16:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.016 00:19:30.016 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.016 16:40:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.016 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:30.274 { 00:19:30.274 "cntlid": 7, 00:19:30.274 "qid": 0, 00:19:30.274 "state": "enabled", 00:19:30.274 "listen_address": { 00:19:30.274 "trtype": "TCP", 00:19:30.274 "adrfam": "IPv4", 00:19:30.274 "traddr": "10.0.0.2", 00:19:30.274 "trsvcid": "4420" 00:19:30.274 }, 00:19:30.274 "peer_address": { 00:19:30.274 "trtype": "TCP", 00:19:30.274 "adrfam": "IPv4", 00:19:30.274 "traddr": "10.0.0.1", 00:19:30.274 "trsvcid": "41172" 00:19:30.274 }, 00:19:30.274 "auth": { 00:19:30.274 "state": "completed", 00:19:30.274 "digest": "sha256", 00:19:30.274 "dhgroup": "null" 00:19:30.274 } 00:19:30.274 } 00:19:30.274 ]' 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.274 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:30.532 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:30.532 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:30.532 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.532 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.532 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.791 16:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.725 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:31.983 16:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:32.242 00:19:32.242 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:32.242 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:32.242 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:32.500 { 00:19:32.500 "cntlid": 9, 00:19:32.500 "qid": 0, 00:19:32.500 "state": "enabled", 00:19:32.500 "listen_address": { 00:19:32.500 "trtype": "TCP", 00:19:32.500 "adrfam": "IPv4", 00:19:32.500 "traddr": "10.0.0.2", 00:19:32.500 "trsvcid": "4420" 00:19:32.500 }, 00:19:32.500 "peer_address": { 00:19:32.500 "trtype": "TCP", 00:19:32.500 "adrfam": "IPv4", 00:19:32.500 "traddr": "10.0.0.1", 
00:19:32.500 "trsvcid": "41216" 00:19:32.500 }, 00:19:32.500 "auth": { 00:19:32.500 "state": "completed", 00:19:32.500 "digest": "sha256", 00:19:32.500 "dhgroup": "ffdhe2048" 00:19:32.500 } 00:19:32.500 } 00:19:32.500 ]' 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.500 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:32.758 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.758 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.758 16:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.047 16:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.986 16:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:34.244 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:34.502 00:19:34.502 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:34.502 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.502 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:34.760 { 00:19:34.760 "cntlid": 11, 00:19:34.760 "qid": 0, 00:19:34.760 "state": "enabled", 00:19:34.760 "listen_address": { 00:19:34.760 "trtype": "TCP", 00:19:34.760 "adrfam": "IPv4", 00:19:34.760 "traddr": "10.0.0.2", 00:19:34.760 "trsvcid": "4420" 00:19:34.760 }, 00:19:34.760 "peer_address": { 00:19:34.760 "trtype": "TCP", 00:19:34.760 "adrfam": "IPv4", 00:19:34.760 "traddr": "10.0.0.1", 00:19:34.760 "trsvcid": "55912" 00:19:34.760 }, 00:19:34.760 "auth": { 00:19:34.760 "state": "completed", 00:19:34.760 "digest": "sha256", 00:19:34.760 "dhgroup": "ffdhe2048" 00:19:34.760 } 00:19:34.760 } 00:19:34.760 ]' 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.760 16:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.017 16:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.949 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.206 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.464 00:19:36.464 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:36.464 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:36.464 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:36.721 { 00:19:36.721 "cntlid": 13, 00:19:36.721 "qid": 0, 00:19:36.721 "state": "enabled", 00:19:36.721 "listen_address": { 00:19:36.721 "trtype": "TCP", 00:19:36.721 "adrfam": "IPv4", 00:19:36.721 "traddr": "10.0.0.2", 00:19:36.721 "trsvcid": "4420" 00:19:36.721 }, 00:19:36.721 "peer_address": { 00:19:36.721 "trtype": "TCP", 00:19:36.721 "adrfam": "IPv4", 00:19:36.721 "traddr": "10.0.0.1", 00:19:36.721 "trsvcid": "55932" 00:19:36.721 }, 00:19:36.721 "auth": { 00:19:36.721 "state": "completed", 00:19:36.721 "digest": "sha256", 00:19:36.721 "dhgroup": "ffdhe2048" 00:19:36.721 } 00:19:36.721 } 00:19:36.721 ]' 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.721 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:36.978 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.978 16:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:36.978 16:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.978 16:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.978 16:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.235 16:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.167 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- 
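#
# The --dhchap-secret strings passed to nvme connect above use the DH-HMAC-CHAP
# secret representation "DHHC-1:<tt>:<base64 of key + 4-byte CRC32>:", where
# <tt> records the transform applied when the key was generated (00 none,
# 01 SHA-256, 02 SHA-384, 03 SHA-512) -- which lines up with key0..key3 in
# this run. A quick inspection of one secret taken from the log (the CRC
# check is omitted in this sketch):
#
secret='DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==:'
IFS=: read -r _ transform b64 _ <<< "$secret"
echo "transform=$transform"              # 00 -> key stored unhashed
printf '%s' "$b64" | base64 -d | wc -c   # 52 bytes = 48-byte key + CRC32
#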
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.424 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.719 00:19:38.719 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:38.719 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:38.719 16:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:38.977 { 00:19:38.977 "cntlid": 15, 00:19:38.977 "qid": 0, 00:19:38.977 "state": "enabled", 00:19:38.977 "listen_address": { 00:19:38.977 "trtype": "TCP", 00:19:38.977 "adrfam": "IPv4", 00:19:38.977 "traddr": "10.0.0.2", 00:19:38.977 "trsvcid": "4420" 00:19:38.977 }, 00:19:38.977 "peer_address": { 00:19:38.977 "trtype": "TCP", 00:19:38.977 "adrfam": "IPv4", 00:19:38.977 "traddr": "10.0.0.1", 00:19:38.977 "trsvcid": "55948" 00:19:38.977 }, 00:19:38.977 "auth": { 00:19:38.977 "state": "completed", 00:19:38.977 "digest": "sha256", 00:19:38.977 "dhgroup": "ffdhe2048" 00:19:38.977 } 00:19:38.977 } 00:19:38.977 ]' 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.977 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.539 16:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:40.470 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.471 16:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:40.471 16:40:47 
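#
# Two RPC endpoints run side by side in this test: rpc_cmd drives the NVMe-oF
# target (its socket is hidden behind xtrace_disable; SPDK's default of
# /var/tmp/spdk.sock is an assumption here), while the target/auth.sh@31 lines
# show hostrpc expanding to rpc.py -s /var/tmp/host.sock, i.e. a second SPDK
# app acting as the host initiator. A wrapper consistent with those lines:
#
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_get_controllers   # -> [{"name": "nvme0", ...}] as seen above
#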
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:41.037 00:19:41.037 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:41.037 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:41.037 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.295 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.295 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.295 16:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.295 16:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.295 16:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.295 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:41.296 { 00:19:41.296 "cntlid": 17, 00:19:41.296 "qid": 0, 00:19:41.296 "state": "enabled", 00:19:41.296 "listen_address": { 00:19:41.296 "trtype": "TCP", 00:19:41.296 "adrfam": "IPv4", 00:19:41.296 "traddr": "10.0.0.2", 00:19:41.296 "trsvcid": "4420" 00:19:41.296 }, 00:19:41.296 "peer_address": { 00:19:41.296 "trtype": "TCP", 00:19:41.296 "adrfam": "IPv4", 00:19:41.296 "traddr": "10.0.0.1", 00:19:41.296 "trsvcid": "55968" 00:19:41.296 }, 00:19:41.296 "auth": { 00:19:41.296 "state": "completed", 00:19:41.296 "digest": "sha256", 00:19:41.296 "dhgroup": "ffdhe3072" 00:19:41.296 } 00:19:41.296 } 00:19:41.296 ]' 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.296 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.553 16:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.484 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:43.050 16:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:43.307 00:19:43.307 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:43.307 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:43.307 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.564 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.564 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.564 16:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:43.565 { 
00:19:43.565 "cntlid": 19, 00:19:43.565 "qid": 0, 00:19:43.565 "state": "enabled", 00:19:43.565 "listen_address": { 00:19:43.565 "trtype": "TCP", 00:19:43.565 "adrfam": "IPv4", 00:19:43.565 "traddr": "10.0.0.2", 00:19:43.565 "trsvcid": "4420" 00:19:43.565 }, 00:19:43.565 "peer_address": { 00:19:43.565 "trtype": "TCP", 00:19:43.565 "adrfam": "IPv4", 00:19:43.565 "traddr": "10.0.0.1", 00:19:43.565 "trsvcid": "55996" 00:19:43.565 }, 00:19:43.565 "auth": { 00:19:43.565 "state": "completed", 00:19:43.565 "digest": "sha256", 00:19:43.565 "dhgroup": "ffdhe3072" 00:19:43.565 } 00:19:43.565 } 00:19:43.565 ]' 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.565 16:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.822 16:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:19:44.754 16:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.755 16:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.013 
16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:45.013 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:45.580 00:19:45.580 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:45.580 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.580 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:45.580 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.580 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:45.850 { 00:19:45.850 "cntlid": 21, 00:19:45.850 "qid": 0, 00:19:45.850 "state": "enabled", 00:19:45.850 "listen_address": { 00:19:45.850 "trtype": "TCP", 00:19:45.850 "adrfam": "IPv4", 00:19:45.850 "traddr": "10.0.0.2", 00:19:45.850 "trsvcid": "4420" 00:19:45.850 }, 00:19:45.850 "peer_address": { 00:19:45.850 "trtype": "TCP", 00:19:45.850 "adrfam": "IPv4", 00:19:45.850 "traddr": "10.0.0.1", 00:19:45.850 "trsvcid": "36840" 00:19:45.850 }, 00:19:45.850 "auth": { 00:19:45.850 "state": "completed", 00:19:45.850 "digest": "sha256", 00:19:45.850 "dhgroup": "ffdhe3072" 00:19:45.850 } 00:19:45.850 } 00:19:45.850 ]' 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.850 16:40:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.112 16:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:19:47.044 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.044 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:47.044 16:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.044 16:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.044 16:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.044 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:47.045 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.045 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.303 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.868 00:19:47.868 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:47.868 16:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:47.868 16:40:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:47.868 { 00:19:47.868 "cntlid": 23, 00:19:47.868 "qid": 0, 00:19:47.868 "state": "enabled", 00:19:47.868 "listen_address": { 00:19:47.868 "trtype": "TCP", 00:19:47.868 "adrfam": "IPv4", 00:19:47.868 "traddr": "10.0.0.2", 00:19:47.868 "trsvcid": "4420" 00:19:47.868 }, 00:19:47.868 "peer_address": { 00:19:47.868 "trtype": "TCP", 00:19:47.868 "adrfam": "IPv4", 00:19:47.868 "traddr": "10.0.0.1", 00:19:47.868 "trsvcid": "36878" 00:19:47.868 }, 00:19:47.868 "auth": { 00:19:47.868 "state": "completed", 00:19:47.868 "digest": "sha256", 00:19:47.868 "dhgroup": "ffdhe3072" 00:19:47.868 } 00:19:47.868 } 00:19:47.868 ]' 00:19:47.868 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.130 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.436 16:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:19:49.368 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.369 16:40:56 
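#
# After every SPDK-host pass, the handshake is repeated with the kernel
# initiator (the @51 nvme connect / @53 nvme disconnect pairs above), so both
# host stacks exercise DH-HMAC-CHAP against the same subsystem. The shape of
# that call with the literal values from this trace; the secret below is a
# placeholder, the full DHHC-1:03 string appears in the log:
#
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret 'DHHC-1:03:...'   # placeholder, not a valid secret
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
#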
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.369 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:49.627 16:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:49.886 00:19:49.886 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:49.886 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:49.886 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.144 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.144 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.144 16:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.144 16:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:50.402 { 00:19:50.402 "cntlid": 25, 00:19:50.402 "qid": 0, 00:19:50.402 "state": "enabled", 00:19:50.402 "listen_address": { 00:19:50.402 "trtype": "TCP", 00:19:50.402 "adrfam": "IPv4", 00:19:50.402 "traddr": "10.0.0.2", 00:19:50.402 "trsvcid": "4420" 00:19:50.402 }, 00:19:50.402 "peer_address": { 00:19:50.402 "trtype": "TCP", 00:19:50.402 "adrfam": "IPv4", 00:19:50.402 "traddr": "10.0.0.1", 00:19:50.402 "trsvcid": "36906" 00:19:50.402 }, 
00:19:50.402 "auth": { 00:19:50.402 "state": "completed", 00:19:50.402 "digest": "sha256", 00:19:50.402 "dhgroup": "ffdhe4096" 00:19:50.402 } 00:19:50.402 } 00:19:50.402 ]' 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.402 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.660 16:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.592 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:51.851 16:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:52.109 00:19:52.367 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:52.367 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.367 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:52.625 { 00:19:52.625 "cntlid": 27, 00:19:52.625 "qid": 0, 00:19:52.625 "state": "enabled", 00:19:52.625 "listen_address": { 00:19:52.625 "trtype": "TCP", 00:19:52.625 "adrfam": "IPv4", 00:19:52.625 "traddr": "10.0.0.2", 00:19:52.625 "trsvcid": "4420" 00:19:52.625 }, 00:19:52.625 "peer_address": { 00:19:52.625 "trtype": "TCP", 00:19:52.625 "adrfam": "IPv4", 00:19:52.625 "traddr": "10.0.0.1", 00:19:52.625 "trsvcid": "36928" 00:19:52.625 }, 00:19:52.625 "auth": { 00:19:52.625 "state": "completed", 00:19:52.625 "digest": "sha256", 00:19:52.625 "dhgroup": "ffdhe4096" 00:19:52.625 } 00:19:52.625 } 00:19:52.625 ]' 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.625 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.883 16:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.816 16:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.073 16:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.074 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:54.074 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:54.638 00:19:54.638 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:54.638 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:54.638 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.896 16:41:01 
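#
# The four keys cycled through here were registered with the target before
# this excerpt begins, so only their DHHC-1:00..03 forms are visible. For
# reference, nvme-cli ships a gen-dhchap-key subcommand that mints secrets in
# this representation; the flag spellings below are illustrative, so confirm
# them with `nvme gen-dhchap-key --help` on the host in question:
#
nvme gen-dhchap-key --key-length=48 --hmac=0 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
# --hmac 0..3 selects none/SHA-256/SHA-384/SHA-512, mirroring the <tt> field
# of the resulting DHHC-1 string.
#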
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:54.896 { 00:19:54.896 "cntlid": 29, 00:19:54.896 "qid": 0, 00:19:54.896 "state": "enabled", 00:19:54.896 "listen_address": { 00:19:54.896 "trtype": "TCP", 00:19:54.896 "adrfam": "IPv4", 00:19:54.896 "traddr": "10.0.0.2", 00:19:54.896 "trsvcid": "4420" 00:19:54.896 }, 00:19:54.896 "peer_address": { 00:19:54.896 "trtype": "TCP", 00:19:54.896 "adrfam": "IPv4", 00:19:54.896 "traddr": "10.0.0.1", 00:19:54.896 "trsvcid": "51208" 00:19:54.896 }, 00:19:54.896 "auth": { 00:19:54.896 "state": "completed", 00:19:54.896 "digest": "sha256", 00:19:54.896 "dhgroup": "ffdhe4096" 00:19:54.896 } 00:19:54.896 } 00:19:54.896 ]' 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.896 16:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.896 16:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.896 16:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.896 16:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.153 16:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.086 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.344 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.910 00:19:56.910 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.910 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.910 16:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.910 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.910 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.910 16:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.910 16:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:57.170 { 00:19:57.170 "cntlid": 31, 00:19:57.170 "qid": 0, 00:19:57.170 "state": "enabled", 00:19:57.170 "listen_address": { 00:19:57.170 "trtype": "TCP", 00:19:57.170 "adrfam": "IPv4", 00:19:57.170 "traddr": "10.0.0.2", 00:19:57.170 "trsvcid": "4420" 00:19:57.170 }, 00:19:57.170 "peer_address": { 00:19:57.170 "trtype": "TCP", 00:19:57.170 "adrfam": "IPv4", 00:19:57.170 "traddr": "10.0.0.1", 00:19:57.170 "trsvcid": "51242" 00:19:57.170 }, 00:19:57.170 "auth": { 00:19:57.170 "state": "completed", 00:19:57.170 "digest": "sha256", 00:19:57.170 "dhgroup": "ffdhe4096" 00:19:57.170 } 00:19:57.170 } 00:19:57.170 ]' 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.170 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.427 16:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.360 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.617 16:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:59.185 00:19:59.185 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:59.185 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:59.185 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:59.444 { 00:19:59.444 "cntlid": 33, 00:19:59.444 "qid": 0, 00:19:59.444 "state": "enabled", 00:19:59.444 "listen_address": { 00:19:59.444 "trtype": "TCP", 00:19:59.444 "adrfam": "IPv4", 00:19:59.444 "traddr": "10.0.0.2", 00:19:59.444 "trsvcid": "4420" 00:19:59.444 }, 00:19:59.444 "peer_address": { 00:19:59.444 "trtype": "TCP", 00:19:59.444 "adrfam": "IPv4", 00:19:59.444 "traddr": "10.0.0.1", 00:19:59.444 "trsvcid": "51274" 00:19:59.444 }, 00:19:59.444 "auth": { 00:19:59.444 "state": "completed", 00:19:59.444 "digest": "sha256", 00:19:59.444 "dhgroup": "ffdhe6144" 00:19:59.444 } 00:19:59.444 } 00:19:59.444 ]' 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.444 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.703 16:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.637 16:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:00.895 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:01.461 00:20:01.461 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:01.462 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:01.462 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:01.720 { 00:20:01.720 "cntlid": 35, 00:20:01.720 "qid": 0, 
00:20:01.720 "state": "enabled", 00:20:01.720 "listen_address": { 00:20:01.720 "trtype": "TCP", 00:20:01.720 "adrfam": "IPv4", 00:20:01.720 "traddr": "10.0.0.2", 00:20:01.720 "trsvcid": "4420" 00:20:01.720 }, 00:20:01.720 "peer_address": { 00:20:01.720 "trtype": "TCP", 00:20:01.720 "adrfam": "IPv4", 00:20:01.720 "traddr": "10.0.0.1", 00:20:01.720 "trsvcid": "51310" 00:20:01.720 }, 00:20:01.720 "auth": { 00:20:01.720 "state": "completed", 00:20:01.720 "digest": "sha256", 00:20:01.720 "dhgroup": "ffdhe6144" 00:20:01.720 } 00:20:01.720 } 00:20:01.720 ]' 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.720 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:01.721 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.721 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:01.721 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.721 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.721 16:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.979 16:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.912 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.170 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:20:03.170 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:03.170 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.170 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.170 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.171 16:41:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:03.171 16:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.171 16:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.171 16:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.171 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:03.171 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:03.779 00:20:03.779 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:03.779 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.779 16:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:04.041 { 00:20:04.041 "cntlid": 37, 00:20:04.041 "qid": 0, 00:20:04.041 "state": "enabled", 00:20:04.041 "listen_address": { 00:20:04.041 "trtype": "TCP", 00:20:04.041 "adrfam": "IPv4", 00:20:04.041 "traddr": "10.0.0.2", 00:20:04.041 "trsvcid": "4420" 00:20:04.041 }, 00:20:04.041 "peer_address": { 00:20:04.041 "trtype": "TCP", 00:20:04.041 "adrfam": "IPv4", 00:20:04.041 "traddr": "10.0.0.1", 00:20:04.041 "trsvcid": "51324" 00:20:04.041 }, 00:20:04.041 "auth": { 00:20:04.041 "state": "completed", 00:20:04.041 "digest": "sha256", 00:20:04.041 "dhgroup": "ffdhe6144" 00:20:04.041 } 00:20:04.041 } 00:20:04.041 ]' 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.041 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:04.298 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.298 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:04.298 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.298 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.298 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.556 16:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.489 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.746 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:20:05.746 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:05.746 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.746 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.747 16:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.311 00:20:06.311 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:06.311 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.311 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:06.569 { 00:20:06.569 "cntlid": 39, 00:20:06.569 "qid": 0, 00:20:06.569 "state": "enabled", 00:20:06.569 "listen_address": { 00:20:06.569 "trtype": "TCP", 00:20:06.569 "adrfam": "IPv4", 00:20:06.569 "traddr": "10.0.0.2", 00:20:06.569 "trsvcid": "4420" 00:20:06.569 }, 00:20:06.569 "peer_address": { 00:20:06.569 "trtype": "TCP", 00:20:06.569 "adrfam": "IPv4", 00:20:06.569 "traddr": "10.0.0.1", 00:20:06.569 "trsvcid": "60626" 00:20:06.569 }, 00:20:06.569 "auth": { 00:20:06.569 "state": "completed", 00:20:06.569 "digest": "sha256", 00:20:06.569 "dhgroup": "ffdhe6144" 00:20:06.569 } 00:20:06.569 } 00:20:06.569 ]' 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.569 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.827 16:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.757 16:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.013 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.943 00:20:08.943 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:08.943 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:08.943 16:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:09.201 { 00:20:09.201 "cntlid": 41, 00:20:09.201 "qid": 0, 00:20:09.201 "state": "enabled", 00:20:09.201 "listen_address": { 00:20:09.201 "trtype": "TCP", 00:20:09.201 "adrfam": "IPv4", 00:20:09.201 "traddr": "10.0.0.2", 00:20:09.201 "trsvcid": "4420" 00:20:09.201 }, 00:20:09.201 "peer_address": { 00:20:09.201 "trtype": "TCP", 00:20:09.201 "adrfam": "IPv4", 00:20:09.201 "traddr": "10.0.0.1", 00:20:09.201 "trsvcid": "60656" 00:20:09.201 }, 00:20:09.201 "auth": { 00:20:09.201 "state": 
"completed", 00:20:09.201 "digest": "sha256", 00:20:09.201 "dhgroup": "ffdhe8192" 00:20:09.201 } 00:20:09.201 } 00:20:09.201 ]' 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.201 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.459 16:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.393 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.650 16:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.908 16:41:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.908 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:10.908 16:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:11.842 00:20:11.842 16:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:11.842 16:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:11.842 16:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:11.842 { 00:20:11.842 "cntlid": 43, 00:20:11.842 "qid": 0, 00:20:11.842 "state": "enabled", 00:20:11.842 "listen_address": { 00:20:11.842 "trtype": "TCP", 00:20:11.842 "adrfam": "IPv4", 00:20:11.842 "traddr": "10.0.0.2", 00:20:11.842 "trsvcid": "4420" 00:20:11.842 }, 00:20:11.842 "peer_address": { 00:20:11.842 "trtype": "TCP", 00:20:11.842 "adrfam": "IPv4", 00:20:11.842 "traddr": "10.0.0.1", 00:20:11.842 "trsvcid": "60690" 00:20:11.842 }, 00:20:11.842 "auth": { 00:20:11.842 "state": "completed", 00:20:11.842 "digest": "sha256", 00:20:11.842 "dhgroup": "ffdhe8192" 00:20:11.842 } 00:20:11.842 } 00:20:11.842 ]' 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.842 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:12.099 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.099 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:12.099 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.100 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.100 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.358 16:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.290 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:13.548 16:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:14.481 00:20:14.481 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:14.481 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.481 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:14.739 { 00:20:14.739 "cntlid": 45, 00:20:14.739 "qid": 0, 00:20:14.739 "state": "enabled", 00:20:14.739 "listen_address": { 00:20:14.739 "trtype": "TCP", 00:20:14.739 "adrfam": "IPv4", 00:20:14.739 "traddr": "10.0.0.2", 00:20:14.739 "trsvcid": "4420" 00:20:14.739 }, 00:20:14.739 "peer_address": { 00:20:14.739 "trtype": "TCP", 00:20:14.739 "adrfam": "IPv4", 00:20:14.739 "traddr": "10.0.0.1", 00:20:14.739 "trsvcid": "60702" 00:20:14.739 }, 00:20:14.739 "auth": { 00:20:14.739 "state": "completed", 00:20:14.739 "digest": "sha256", 00:20:14.739 "dhgroup": "ffdhe8192" 00:20:14.739 } 00:20:14.739 } 00:20:14.739 ]' 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.739 16:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.996 16:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.926 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:20:16.183 
16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.183 16:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.114 00:20:17.114 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:17.114 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:17.114 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.371 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.371 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.371 16:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.371 16:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.371 16:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.371 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:17.371 { 00:20:17.371 "cntlid": 47, 00:20:17.371 "qid": 0, 00:20:17.371 "state": "enabled", 00:20:17.371 "listen_address": { 00:20:17.371 "trtype": "TCP", 00:20:17.371 "adrfam": "IPv4", 00:20:17.371 "traddr": "10.0.0.2", 00:20:17.371 "trsvcid": "4420" 00:20:17.371 }, 00:20:17.371 "peer_address": { 00:20:17.371 "trtype": "TCP", 00:20:17.371 "adrfam": "IPv4", 00:20:17.371 "traddr": "10.0.0.1", 00:20:17.371 "trsvcid": "41330" 00:20:17.371 }, 00:20:17.371 "auth": { 00:20:17.371 "state": "completed", 00:20:17.371 "digest": "sha256", 00:20:17.371 "dhgroup": "ffdhe8192" 00:20:17.372 } 00:20:17.372 } 00:20:17.372 ]' 00:20:17.372 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:17.372 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.372 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:17.372 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.372 
16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:17.629 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.629 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.629 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.886 16:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:20:18.820 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.820 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.821 16:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.821 16:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.078 16:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.078 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:19.078 16:41:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:19.335 00:20:19.335 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:19.335 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.335 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:19.592 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:19.593 { 00:20:19.593 "cntlid": 49, 00:20:19.593 "qid": 0, 00:20:19.593 "state": "enabled", 00:20:19.593 "listen_address": { 00:20:19.593 "trtype": "TCP", 00:20:19.593 "adrfam": "IPv4", 00:20:19.593 "traddr": "10.0.0.2", 00:20:19.593 "trsvcid": "4420" 00:20:19.593 }, 00:20:19.593 "peer_address": { 00:20:19.593 "trtype": "TCP", 00:20:19.593 "adrfam": "IPv4", 00:20:19.593 "traddr": "10.0.0.1", 00:20:19.593 "trsvcid": "41376" 00:20:19.593 }, 00:20:19.593 "auth": { 00:20:19.593 "state": "completed", 00:20:19.593 "digest": "sha384", 00:20:19.593 "dhgroup": "null" 00:20:19.593 } 00:20:19.593 } 00:20:19.593 ]' 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.593 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.850 16:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:20.785 16:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:21.043 16:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.044 16:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.044 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:20:21.044 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:20:21.302
00:20:21.302 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:20:21.302 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:20:21.302 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:21.559 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:21.559 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:21.559 16:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:21.559 16:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.559 16:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.559 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:20:21.559 {
00:20:21.559 "cntlid": 51,
00:20:21.559 "qid": 0,
00:20:21.559 "state": "enabled",
00:20:21.559 "listen_address": {
00:20:21.559 "trtype": "TCP",
00:20:21.559 "adrfam": "IPv4",
00:20:21.559 "traddr": "10.0.0.2",
00:20:21.559 "trsvcid": "4420"
00:20:21.559 },
00:20:21.560 "peer_address": {
00:20:21.560 "trtype": "TCP",
00:20:21.560 "adrfam": "IPv4",
00:20:21.560 "traddr": "10.0.0.1",
00:20:21.560 "trsvcid": "41404"
00:20:21.560 },
00:20:21.560 "auth": {
00:20:21.560 "state": "completed",
00:20:21.560 "digest": "sha384",
00:20:21.560 "dhgroup": "null"
00:20:21.560 }
00:20:21.560 }
00:20:21.560 ]'
00:20:21.560 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:20:21.560 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:21.560 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:20:21.560 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:20:21.560 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:20:21.817 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:21.817 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:21.817 16:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:22.074 16:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff:
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:23.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:23.005 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2
00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:23.262 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:23.520 00:20:23.520 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:23.520 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:23.520 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:23.777 { 00:20:23.777 "cntlid": 53, 00:20:23.777 "qid": 0, 00:20:23.777 "state": "enabled", 00:20:23.777 "listen_address": { 00:20:23.777 "trtype": "TCP", 00:20:23.777 "adrfam": "IPv4", 00:20:23.777 "traddr": "10.0.0.2", 00:20:23.777 "trsvcid": "4420" 00:20:23.777 }, 00:20:23.777 "peer_address": { 00:20:23.777 "trtype": "TCP", 00:20:23.777 "adrfam": "IPv4", 00:20:23.777 "traddr": "10.0.0.1", 00:20:23.777 "trsvcid": "41434" 00:20:23.777 }, 00:20:23.777 "auth": { 00:20:23.777 "state": "completed", 00:20:23.777 "digest": "sha384", 00:20:23.777 "dhgroup": "null" 00:20:23.777 } 00:20:23.777 } 00:20:23.777 ]' 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.777 16:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:24.035 16:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:24.035 16:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:24.035 16:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.035 16:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.035 16:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.292 16:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.224 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.481 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.738 00:20:25.738 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:25.738 16:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.738 16:41:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # jq -r '.[].name' 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:25.995 { 00:20:25.995 "cntlid": 55, 00:20:25.995 "qid": 0, 00:20:25.995 "state": "enabled", 00:20:25.995 "listen_address": { 00:20:25.995 "trtype": "TCP", 00:20:25.995 "adrfam": "IPv4", 00:20:25.995 "traddr": "10.0.0.2", 00:20:25.995 "trsvcid": "4420" 00:20:25.995 }, 00:20:25.995 "peer_address": { 00:20:25.995 "trtype": "TCP", 00:20:25.995 "adrfam": "IPv4", 00:20:25.995 "traddr": "10.0.0.1", 00:20:25.995 "trsvcid": "47506" 00:20:25.995 }, 00:20:25.995 "auth": { 00:20:25.995 "state": "completed", 00:20:25.995 "digest": "sha384", 00:20:25.995 "dhgroup": "null" 00:20:25.995 } 00:20:25.995 } 00:20:25.995 ]' 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.995 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:26.252 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:26.252 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:26.252 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.252 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.252 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.510 16:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.441 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.699 16:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.956 00:20:27.956 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.956 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.956 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:28.214 { 00:20:28.214 "cntlid": 57, 00:20:28.214 "qid": 0, 00:20:28.214 "state": "enabled", 00:20:28.214 "listen_address": { 00:20:28.214 "trtype": "TCP", 00:20:28.214 "adrfam": "IPv4", 00:20:28.214 "traddr": "10.0.0.2", 00:20:28.214 "trsvcid": "4420" 00:20:28.214 }, 00:20:28.214 "peer_address": { 00:20:28.214 "trtype": "TCP", 00:20:28.214 "adrfam": "IPv4", 00:20:28.214 "traddr": "10.0.0.1", 00:20:28.214 "trsvcid": "47534" 00:20:28.214 }, 00:20:28.214 "auth": { 00:20:28.214 "state": "completed", 00:20:28.214 "digest": "sha384", 00:20:28.214 "dhgroup": "ffdhe2048" 00:20:28.214 } 00:20:28.214 } 
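
The qpair record being dumped here is what every round of this test asserts against: target/auth.sh captures nvmf_subsystem_get_qpairs for the subsystem and probes the auth object with the jq filters that follow (.[0].auth.digest, .dhgroup, .state). Condensed into standalone shell, the check looks roughly like the sketch below; the rpc and qpairs variable names are mine, and the target app is assumed to answer on rpc.py's default socket, since the log's rpc_cmd shows no -s flag.

# Sketch: assert the established queue pair negotiated the expected auth parameters.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # target-side RPC
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished
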
00:20:28.214 ]' 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.214 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.471 16:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.403 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.661 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:20:29.661 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:29.661 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.661 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.661 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.662 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:29.662 16:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.662 16:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.662 16:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.662 16:41:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:29.662 16:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:29.919 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.177 16:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:30.435 { 00:20:30.435 "cntlid": 59, 00:20:30.435 "qid": 0, 00:20:30.435 "state": "enabled", 00:20:30.435 "listen_address": { 00:20:30.435 "trtype": "TCP", 00:20:30.435 "adrfam": "IPv4", 00:20:30.435 "traddr": "10.0.0.2", 00:20:30.435 "trsvcid": "4420" 00:20:30.435 }, 00:20:30.435 "peer_address": { 00:20:30.435 "trtype": "TCP", 00:20:30.435 "adrfam": "IPv4", 00:20:30.435 "traddr": "10.0.0.1", 00:20:30.435 "trsvcid": "47564" 00:20:30.435 }, 00:20:30.435 "auth": { 00:20:30.435 "state": "completed", 00:20:30.435 "digest": "sha384", 00:20:30.435 "dhgroup": "ffdhe2048" 00:20:30.435 } 00:20:30.435 } 00:20:30.435 ]' 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.435 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.692 16:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.623 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:31.880 16:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:32.138 00:20:32.138 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:32.138 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:32.138 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.394 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.394 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.394 16:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.394 16:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.651 { 00:20:32.651 "cntlid": 61, 00:20:32.651 "qid": 0, 00:20:32.651 "state": "enabled", 00:20:32.651 "listen_address": { 00:20:32.651 "trtype": "TCP", 00:20:32.651 "adrfam": "IPv4", 00:20:32.651 "traddr": "10.0.0.2", 00:20:32.651 "trsvcid": "4420" 00:20:32.651 }, 00:20:32.651 "peer_address": { 00:20:32.651 "trtype": "TCP", 00:20:32.651 "adrfam": "IPv4", 00:20:32.651 "traddr": "10.0.0.1", 00:20:32.651 "trsvcid": "47596" 00:20:32.651 }, 00:20:32.651 "auth": { 00:20:32.651 "state": "completed", 00:20:32.651 "digest": "sha384", 00:20:32.651 "dhgroup": "ffdhe2048" 00:20:32.651 } 00:20:32.651 } 00:20:32.651 ]' 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.651 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.652 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.908 16:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.870 16:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.127 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.384 00:20:34.384 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:34.384 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:34.384 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:34.642 { 00:20:34.642 "cntlid": 63, 00:20:34.642 "qid": 0, 00:20:34.642 "state": "enabled", 00:20:34.642 "listen_address": { 00:20:34.642 "trtype": "TCP", 00:20:34.642 "adrfam": "IPv4", 00:20:34.642 "traddr": "10.0.0.2", 00:20:34.642 "trsvcid": "4420" 00:20:34.642 }, 00:20:34.642 "peer_address": { 00:20:34.642 "trtype": "TCP", 00:20:34.642 "adrfam": "IPv4", 00:20:34.642 "traddr": "10.0.0.1", 00:20:34.642 "trsvcid": "36192" 00:20:34.642 }, 00:20:34.642 "auth": { 00:20:34.642 "state": "completed", 00:20:34.642 "digest": "sha384", 00:20:34.642 "dhgroup": "ffdhe2048" 00:20:34.642 } 00:20:34.642 } 00:20:34.642 ]' 00:20:34.642 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.899 16:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.157 16:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.089 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:36.346 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:36.910 00:20:36.910 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:36.910 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:36.910 16:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:36.910 { 00:20:36.910 "cntlid": 65, 00:20:36.910 "qid": 0, 00:20:36.910 "state": "enabled", 00:20:36.910 "listen_address": { 00:20:36.910 "trtype": "TCP", 00:20:36.910 "adrfam": "IPv4", 00:20:36.910 "traddr": "10.0.0.2", 00:20:36.910 "trsvcid": "4420" 00:20:36.910 }, 00:20:36.910 "peer_address": { 00:20:36.910 "trtype": "TCP", 00:20:36.910 "adrfam": "IPv4", 00:20:36.910 "traddr": "10.0.0.1", 00:20:36.910 "trsvcid": "36222" 00:20:36.910 }, 00:20:36.910 "auth": { 00:20:36.910 "state": "completed", 00:20:36.910 "digest": "sha384", 00:20:36.910 "dhgroup": "ffdhe3072" 00:20:36.910 } 00:20:36.910 } 00:20:36.910 ]' 00:20:36.910 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.168 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.425 16:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.356 
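
Each round above ends the same way on the host: the Linux initiator authenticates with that round's DH-HMAC-CHAP secret via nvme-cli, disconnects, and the host is de-authorized again with nvmf_subsystem_remove_host so the next key can be wired in. The four keys carry secrets in the four DHHC-1 formats this log exercises (DHHC-1:00: through DHHC-1:03:). A minimal host-side round, reusing the exact flags from the log with the secret value elided rather than invented:

# Sketch: one host-side authentication round against the test subsystem.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret 'DHHC-1:00:<base64 secret>:'   # secret elided; format as in the log
nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # expect: disconnected 1 controller(s)
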
16:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.356 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:38.612 16:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:38.869 00:20:38.869 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:38.869 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:38.869 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:39.126 { 00:20:39.126 "cntlid": 67, 00:20:39.126 "qid": 0, 00:20:39.126 "state": "enabled", 00:20:39.126 "listen_address": { 00:20:39.126 "trtype": "TCP", 00:20:39.126 "adrfam": "IPv4", 00:20:39.126 "traddr": "10.0.0.2", 00:20:39.126 "trsvcid": 
"4420" 00:20:39.126 }, 00:20:39.126 "peer_address": { 00:20:39.126 "trtype": "TCP", 00:20:39.126 "adrfam": "IPv4", 00:20:39.126 "traddr": "10.0.0.1", 00:20:39.126 "trsvcid": "36256" 00:20:39.126 }, 00:20:39.126 "auth": { 00:20:39.126 "state": "completed", 00:20:39.126 "digest": "sha384", 00:20:39.126 "dhgroup": "ffdhe3072" 00:20:39.126 } 00:20:39.126 } 00:20:39.126 ]' 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.126 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:39.383 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.383 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:39.383 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.383 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.383 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.640 16:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.571 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:40.828 16:41:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.828 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:40.829 16:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:41.086 00:20:41.086 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:41.086 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:41.086 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:41.344 { 00:20:41.344 "cntlid": 69, 00:20:41.344 "qid": 0, 00:20:41.344 "state": "enabled", 00:20:41.344 "listen_address": { 00:20:41.344 "trtype": "TCP", 00:20:41.344 "adrfam": "IPv4", 00:20:41.344 "traddr": "10.0.0.2", 00:20:41.344 "trsvcid": "4420" 00:20:41.344 }, 00:20:41.344 "peer_address": { 00:20:41.344 "trtype": "TCP", 00:20:41.344 "adrfam": "IPv4", 00:20:41.344 "traddr": "10.0.0.1", 00:20:41.344 "trsvcid": "36282" 00:20:41.344 }, 00:20:41.344 "auth": { 00:20:41.344 "state": "completed", 00:20:41.344 "digest": "sha384", 00:20:41.344 "dhgroup": "ffdhe3072" 00:20:41.344 } 00:20:41.344 } 00:20:41.344 ]' 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.344 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.601 16:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.532 16:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.789 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.354 00:20:43.354 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:43.354 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:43.354 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:43.612 { 00:20:43.612 "cntlid": 71, 00:20:43.612 "qid": 0, 00:20:43.612 "state": "enabled", 00:20:43.612 "listen_address": { 00:20:43.612 "trtype": "TCP", 00:20:43.612 "adrfam": "IPv4", 00:20:43.612 "traddr": "10.0.0.2", 00:20:43.612 "trsvcid": "4420" 00:20:43.612 }, 00:20:43.612 "peer_address": { 00:20:43.612 "trtype": "TCP", 00:20:43.612 "adrfam": "IPv4", 00:20:43.612 "traddr": "10.0.0.1", 00:20:43.612 "trsvcid": "36304" 00:20:43.612 }, 00:20:43.612 "auth": { 00:20:43.612 "state": "completed", 00:20:43.612 "digest": "sha384", 00:20:43.612 "dhgroup": "ffdhe3072" 00:20:43.612 } 00:20:43.612 } 00:20:43.612 ]' 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.612 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.869 16:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:44.798 16:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.798 16:41:51 
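
With the ffdhe3072 sweep finished, the outer dhgroup loop advances to ffdhe4096, the largest group this job covers; a bigger FFDHE group strengthens the ephemeral Diffie-Hellman step at the cost of a slower handshake. bdev_nvme_set_options is what pins the host-side driver to a single digest and group per iteration, which is how the test forces each combination to actually be negotiated. Whether several values may be allowed at once is not demonstrated in this log, so the sketch sticks to the single-value form used here:

# Sketch: constrain what the host-side driver will negotiate for DH-HMAC-CHAP.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 \
    --dhchap-dhgroups ffdhe4096   # larger group: stronger DH, costlier handshake
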
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.054 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:45.055 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:45.619 00:20:45.619 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:45.619 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:45.619 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.876 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.876 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.876 16:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.876 16:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.876 16:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.876 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:45.876 { 00:20:45.876 "cntlid": 73, 00:20:45.876 "qid": 0, 00:20:45.876 "state": "enabled", 00:20:45.876 "listen_address": { 00:20:45.876 "trtype": "TCP", 00:20:45.876 "adrfam": "IPv4", 00:20:45.876 "traddr": "10.0.0.2", 00:20:45.876 "trsvcid": "4420" 00:20:45.876 }, 00:20:45.876 "peer_address": { 00:20:45.876 "trtype": "TCP", 00:20:45.876 "adrfam": "IPv4", 00:20:45.877 "traddr": "10.0.0.1", 00:20:45.877 "trsvcid": "39530" 00:20:45.877 }, 00:20:45.877 "auth": { 00:20:45.877 "state": "completed", 00:20:45.877 "digest": "sha384", 00:20:45.877 "dhgroup": "ffdhe4096" 00:20:45.877 } 00:20:45.877 } 00:20:45.877 ]' 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.877 16:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.134 16:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.067 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:47.325 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:47.891 00:20:47.891 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:47.891 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:47.891 16:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:47.891 { 00:20:47.891 "cntlid": 75, 00:20:47.891 "qid": 0, 00:20:47.891 "state": "enabled", 00:20:47.891 "listen_address": { 00:20:47.891 "trtype": "TCP", 00:20:47.891 "adrfam": "IPv4", 00:20:47.891 "traddr": "10.0.0.2", 00:20:47.891 "trsvcid": "4420" 00:20:47.891 }, 00:20:47.891 "peer_address": { 00:20:47.891 "trtype": "TCP", 00:20:47.891 "adrfam": "IPv4", 00:20:47.891 "traddr": "10.0.0.1", 00:20:47.891 "trsvcid": "39556" 00:20:47.891 }, 00:20:47.891 "auth": { 00:20:47.891 "state": "completed", 00:20:47.891 "digest": "sha384", 00:20:47.891 "dhgroup": "ffdhe4096" 00:20:47.891 } 00:20:47.891 } 00:20:47.891 ]' 00:20:47.891 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.154 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.411 16:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.344 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.602 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:49.603 16:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:49.861 00:20:49.861 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:49.861 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.861 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
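Condensing the round just logged: each connect_authenticate iteration reduces to the RPC sequence below. This is a sketch assembled from the commands visible above, not the verbatim auth.sh; the key id (key2) and dhgroup (ffdhe4096) stand in for the loop variables, and the target-side rpc_cmd is assumed to use the default RPC socket while the host-side calls go to /var/tmp/host.sock as in this run.

    # Condensed sketch of one connect_authenticate round (values taken from this run).
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # 1. Pin the initiator-side bdev_nvme layer to the digest/dhgroup pair under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # 2. Register the host on the target subsystem with the DH-HMAC-CHAP key being tested.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2
    # 3. Attach a controller; this drives the authentication handshake on the new qpair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2
    # 4. After the qpair assertions (auth.sh@43-47), tear the controller down again.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0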
00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:50.119 { 00:20:50.119 "cntlid": 77, 00:20:50.119 "qid": 0, 00:20:50.119 "state": "enabled", 00:20:50.119 "listen_address": { 00:20:50.119 "trtype": "TCP", 00:20:50.119 "adrfam": "IPv4", 00:20:50.119 "traddr": "10.0.0.2", 00:20:50.119 "trsvcid": "4420" 00:20:50.119 }, 00:20:50.119 "peer_address": { 00:20:50.119 "trtype": "TCP", 00:20:50.119 "adrfam": "IPv4", 00:20:50.119 "traddr": "10.0.0.1", 00:20:50.119 "trsvcid": "39586" 00:20:50.119 }, 00:20:50.119 "auth": { 00:20:50.119 "state": "completed", 00:20:50.119 "digest": "sha384", 00:20:50.119 "dhgroup": "ffdhe4096" 00:20:50.119 } 00:20:50.119 } 00:20:50.119 ]' 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.119 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:50.377 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.377 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:50.377 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.377 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.377 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.635 16:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.568 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.826 16:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.084 00:20:52.084 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:52.084 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:52.084 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.342 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.342 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:52.343 { 00:20:52.343 "cntlid": 79, 00:20:52.343 "qid": 0, 00:20:52.343 "state": "enabled", 00:20:52.343 "listen_address": { 00:20:52.343 "trtype": "TCP", 00:20:52.343 "adrfam": "IPv4", 00:20:52.343 "traddr": "10.0.0.2", 00:20:52.343 "trsvcid": "4420" 00:20:52.343 }, 00:20:52.343 "peer_address": { 00:20:52.343 "trtype": "TCP", 00:20:52.343 "adrfam": "IPv4", 00:20:52.343 "traddr": "10.0.0.1", 00:20:52.343 "trsvcid": "39614" 00:20:52.343 }, 00:20:52.343 "auth": { 00:20:52.343 "state": "completed", 00:20:52.343 "digest": "sha384", 00:20:52.343 "dhgroup": "ffdhe4096" 00:20:52.343 } 00:20:52.343 } 00:20:52.343 ]' 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.343 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:52.600 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.600 16:41:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.600 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.600 16:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.973 16:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:53.973 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:54.538 00:20:54.538 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:54.538 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:54.538 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:54.795 { 00:20:54.795 "cntlid": 81, 00:20:54.795 "qid": 0, 00:20:54.795 "state": "enabled", 00:20:54.795 "listen_address": { 00:20:54.795 "trtype": "TCP", 00:20:54.795 "adrfam": "IPv4", 00:20:54.795 "traddr": "10.0.0.2", 00:20:54.795 "trsvcid": "4420" 00:20:54.795 }, 00:20:54.795 "peer_address": { 00:20:54.795 "trtype": "TCP", 00:20:54.795 "adrfam": "IPv4", 00:20:54.795 "traddr": "10.0.0.1", 00:20:54.795 "trsvcid": "38270" 00:20:54.795 }, 00:20:54.795 "auth": { 00:20:54.795 "state": "completed", 00:20:54.795 "digest": "sha384", 00:20:54.795 "dhgroup": "ffdhe6144" 00:20:54.795 } 00:20:54.795 } 00:20:54.795 ]' 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.795 16:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.053 16:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.985 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:56.242 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:56.807 00:20:56.807 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:56.807 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:56.807 16:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:57.065 { 00:20:57.065 "cntlid": 83, 00:20:57.065 "qid": 0, 00:20:57.065 "state": "enabled", 00:20:57.065 "listen_address": { 00:20:57.065 "trtype": "TCP", 00:20:57.065 "adrfam": "IPv4", 00:20:57.065 "traddr": "10.0.0.2", 00:20:57.065 "trsvcid": "4420" 00:20:57.065 }, 00:20:57.065 "peer_address": { 00:20:57.065 
"trtype": "TCP", 00:20:57.065 "adrfam": "IPv4", 00:20:57.065 "traddr": "10.0.0.1", 00:20:57.065 "trsvcid": "38296" 00:20:57.065 }, 00:20:57.065 "auth": { 00:20:57.065 "state": "completed", 00:20:57.065 "digest": "sha384", 00:20:57.065 "dhgroup": "ffdhe6144" 00:20:57.065 } 00:20:57.065 } 00:20:57.065 ]' 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.065 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:57.323 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.323 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:57.323 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.323 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.323 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.581 16:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.513 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.771 16:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.372 00:20:59.372 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:59.372 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:59.372 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:59.630 { 00:20:59.630 "cntlid": 85, 00:20:59.630 "qid": 0, 00:20:59.630 "state": "enabled", 00:20:59.630 "listen_address": { 00:20:59.630 "trtype": "TCP", 00:20:59.630 "adrfam": "IPv4", 00:20:59.630 "traddr": "10.0.0.2", 00:20:59.630 "trsvcid": "4420" 00:20:59.630 }, 00:20:59.630 "peer_address": { 00:20:59.630 "trtype": "TCP", 00:20:59.630 "adrfam": "IPv4", 00:20:59.630 "traddr": "10.0.0.1", 00:20:59.630 "trsvcid": "38328" 00:20:59.630 }, 00:20:59.630 "auth": { 00:20:59.630 "state": "completed", 00:20:59.630 "digest": "sha384", 00:20:59.630 "dhgroup": "ffdhe6144" 00:20:59.630 } 00:20:59.630 } 00:20:59.630 ]' 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.630 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:59.888 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.888 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.888 16:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.145 16:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:01.075 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.075 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.075 16:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.075 16:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.075 16:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.075 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:01.076 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.076 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.333 16:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.334 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.334 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.899 00:21:01.899 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:01.899 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:01.899 16:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.156 16:42:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.156 { 00:21:02.156 "cntlid": 87, 00:21:02.156 "qid": 0, 00:21:02.156 "state": "enabled", 00:21:02.156 "listen_address": { 00:21:02.156 "trtype": "TCP", 00:21:02.156 "adrfam": "IPv4", 00:21:02.156 "traddr": "10.0.0.2", 00:21:02.156 "trsvcid": "4420" 00:21:02.156 }, 00:21:02.156 "peer_address": { 00:21:02.156 "trtype": "TCP", 00:21:02.156 "adrfam": "IPv4", 00:21:02.156 "traddr": "10.0.0.1", 00:21:02.156 "trsvcid": "38358" 00:21:02.156 }, 00:21:02.156 "auth": { 00:21:02.156 "state": "completed", 00:21:02.156 "digest": "sha384", 00:21:02.156 "dhgroup": "ffdhe6144" 00:21:02.156 } 00:21:02.156 } 00:21:02.156 ]' 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.156 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.157 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.157 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.157 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.157 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.157 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.157 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.415 16:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.404 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.661 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:21:03.661 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:03.661 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.661 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.661 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.662 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:03.662 16:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.662 16:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.662 16:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.662 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:03.662 16:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:04.593 00:21:04.593 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:04.593 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:04.593 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.851 { 00:21:04.851 "cntlid": 89, 00:21:04.851 "qid": 0, 00:21:04.851 "state": "enabled", 00:21:04.851 "listen_address": { 00:21:04.851 "trtype": "TCP", 00:21:04.851 "adrfam": "IPv4", 00:21:04.851 "traddr": "10.0.0.2", 00:21:04.851 "trsvcid": "4420" 00:21:04.851 }, 00:21:04.851 "peer_address": { 00:21:04.851 "trtype": "TCP", 00:21:04.851 "adrfam": "IPv4", 00:21:04.851 "traddr": "10.0.0.1", 00:21:04.851 "trsvcid": "38402" 00:21:04.851 }, 00:21:04.851 "auth": { 00:21:04.851 "state": "completed", 00:21:04.851 "digest": "sha384", 00:21:04.851 "dhgroup": "ffdhe8192" 00:21:04.851 } 00:21:04.851 } 00:21:04.851 ]' 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:04.851 16:42:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.851 16:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:04.851 16:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.851 16:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.851 16:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.108 16:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.039 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:06.296 16:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:07.229 00:21:07.229 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:07.229 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.229 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:07.486 { 00:21:07.486 "cntlid": 91, 00:21:07.486 "qid": 0, 00:21:07.486 "state": "enabled", 00:21:07.486 "listen_address": { 00:21:07.486 "trtype": "TCP", 00:21:07.486 "adrfam": "IPv4", 00:21:07.486 "traddr": "10.0.0.2", 00:21:07.486 "trsvcid": "4420" 00:21:07.486 }, 00:21:07.486 "peer_address": { 00:21:07.486 "trtype": "TCP", 00:21:07.486 "adrfam": "IPv4", 00:21:07.486 "traddr": "10.0.0.1", 00:21:07.486 "trsvcid": "41200" 00:21:07.486 }, 00:21:07.486 "auth": { 00:21:07.486 "state": "completed", 00:21:07.486 "digest": "sha384", 00:21:07.486 "dhgroup": "ffdhe8192" 00:21:07.486 } 00:21:07.486 } 00:21:07.486 ]' 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.486 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.744 16:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.677 16:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.934 16:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.935 16:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.935 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.935 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:09.866 00:21:09.866 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:09.866 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:09.866 16:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
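The auth.sh@45-47 assertions that follow each attach reduce to three jq probes against the qpair listing; a minimal standalone equivalent, assuming the same subsystem NQN and the target's default RPC socket, looks like this:

    # Fetch the qpairs for the subsystem and check the negotiated auth parameters.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # Each element reports the digest, dhgroup, and state once the handshake ends.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]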
00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:10.124 { 00:21:10.124 "cntlid": 93, 00:21:10.124 "qid": 0, 00:21:10.124 "state": "enabled", 00:21:10.124 "listen_address": { 00:21:10.124 "trtype": "TCP", 00:21:10.124 "adrfam": "IPv4", 00:21:10.124 "traddr": "10.0.0.2", 00:21:10.124 "trsvcid": "4420" 00:21:10.124 }, 00:21:10.124 "peer_address": { 00:21:10.124 "trtype": "TCP", 00:21:10.124 "adrfam": "IPv4", 00:21:10.124 "traddr": "10.0.0.1", 00:21:10.124 "trsvcid": "41220" 00:21:10.124 }, 00:21:10.124 "auth": { 00:21:10.124 "state": "completed", 00:21:10.124 "digest": "sha384", 00:21:10.124 "dhgroup": "ffdhe8192" 00:21:10.124 } 00:21:10.124 } 00:21:10.124 ]' 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.124 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:10.382 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.382 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.382 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.639 16:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.572 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.829 16:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.760 00:21:12.760 16:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:12.760 16:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:12.760 16:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.017 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:13.018 { 00:21:13.018 "cntlid": 95, 00:21:13.018 "qid": 0, 00:21:13.018 "state": "enabled", 00:21:13.018 "listen_address": { 00:21:13.018 "trtype": "TCP", 00:21:13.018 "adrfam": "IPv4", 00:21:13.018 "traddr": "10.0.0.2", 00:21:13.018 "trsvcid": "4420" 00:21:13.018 }, 00:21:13.018 "peer_address": { 00:21:13.018 "trtype": "TCP", 00:21:13.018 "adrfam": "IPv4", 00:21:13.018 "traddr": "10.0.0.1", 00:21:13.018 "trsvcid": "41246" 00:21:13.018 }, 00:21:13.018 "auth": { 00:21:13.018 "state": "completed", 00:21:13.018 "digest": "sha384", 00:21:13.018 "dhgroup": "ffdhe8192" 00:21:13.018 } 00:21:13.018 } 00:21:13.018 ]' 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.018 16:42:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.018 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.275 16:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.208 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:14.466 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:14.722 00:21:14.722 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:14.722 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:14.722 16:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:14.980 { 00:21:14.980 "cntlid": 97, 00:21:14.980 "qid": 0, 00:21:14.980 "state": "enabled", 00:21:14.980 "listen_address": { 00:21:14.980 "trtype": "TCP", 00:21:14.980 "adrfam": "IPv4", 00:21:14.980 "traddr": "10.0.0.2", 00:21:14.980 "trsvcid": "4420" 00:21:14.980 }, 00:21:14.980 "peer_address": { 00:21:14.980 "trtype": "TCP", 00:21:14.980 "adrfam": "IPv4", 00:21:14.980 "traddr": "10.0.0.1", 00:21:14.980 "trsvcid": "43276" 00:21:14.980 }, 00:21:14.980 "auth": { 00:21:14.980 "state": "completed", 00:21:14.980 "digest": "sha512", 00:21:14.980 "dhgroup": "null" 00:21:14.980 } 00:21:14.980 } 00:21:14.980 ]' 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.980 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:15.238 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:15.238 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:15.238 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.238 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.238 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.495 16:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:21:16.426 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.426 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.426 16:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.426 16:42:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.426 16:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.427 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:16.427 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.427 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:16.684 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:16.941 00:21:16.941 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:16.941 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:16.941 16:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.198 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.198 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.198 16:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.198 16:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.198 16:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.198 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:17.198 { 00:21:17.198 "cntlid": 99, 00:21:17.198 "qid": 0, 00:21:17.198 "state": "enabled", 00:21:17.198 "listen_address": { 00:21:17.198 "trtype": "TCP", 00:21:17.198 "adrfam": "IPv4", 00:21:17.198 "traddr": "10.0.0.2", 00:21:17.198 "trsvcid": "4420" 00:21:17.198 }, 
00:21:17.199 "peer_address": { 00:21:17.199 "trtype": "TCP", 00:21:17.199 "adrfam": "IPv4", 00:21:17.199 "traddr": "10.0.0.1", 00:21:17.199 "trsvcid": "43308" 00:21:17.199 }, 00:21:17.199 "auth": { 00:21:17.199 "state": "completed", 00:21:17.199 "digest": "sha512", 00:21:17.199 "dhgroup": "null" 00:21:17.199 } 00:21:17.199 } 00:21:17.199 ]' 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.199 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.456 16:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:21:18.431 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.432 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:18.689 16:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:18.947 00:21:19.204 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:19.204 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:19.204 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:19.461 { 00:21:19.461 "cntlid": 101, 00:21:19.461 "qid": 0, 00:21:19.461 "state": "enabled", 00:21:19.461 "listen_address": { 00:21:19.461 "trtype": "TCP", 00:21:19.461 "adrfam": "IPv4", 00:21:19.461 "traddr": "10.0.0.2", 00:21:19.461 "trsvcid": "4420" 00:21:19.461 }, 00:21:19.461 "peer_address": { 00:21:19.461 "trtype": "TCP", 00:21:19.461 "adrfam": "IPv4", 00:21:19.461 "traddr": "10.0.0.1", 00:21:19.461 "trsvcid": "43340" 00:21:19.461 }, 00:21:19.461 "auth": { 00:21:19.461 "state": "completed", 00:21:19.461 "digest": "sha512", 00:21:19.461 "dhgroup": "null" 00:21:19.461 } 00:21:19.461 } 00:21:19.461 ]' 00:21:19.461 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.462 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.719 16:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.651 16:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.908 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:21:20.908 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:20.908 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.908 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.909 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.166 00:21:21.166 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:21.166 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:21.166 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:21.425 { 00:21:21.425 "cntlid": 103, 00:21:21.425 "qid": 0, 00:21:21.425 "state": "enabled", 00:21:21.425 "listen_address": { 00:21:21.425 "trtype": "TCP", 00:21:21.425 "adrfam": "IPv4", 00:21:21.425 "traddr": "10.0.0.2", 00:21:21.425 "trsvcid": "4420" 00:21:21.425 }, 00:21:21.425 "peer_address": { 00:21:21.425 "trtype": "TCP", 00:21:21.425 "adrfam": "IPv4", 00:21:21.425 "traddr": "10.0.0.1", 00:21:21.425 "trsvcid": "43360" 00:21:21.425 }, 00:21:21.425 "auth": { 00:21:21.425 "state": "completed", 00:21:21.425 "digest": "sha512", 00:21:21.425 "dhgroup": "null" 00:21:21.425 } 00:21:21.425 } 00:21:21.425 ]' 00:21:21.425 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.683 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.941 16:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:22.875 16:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:23.144 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:23.406 00:21:23.406 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:23.406 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:23.406 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:23.664 { 00:21:23.664 "cntlid": 105, 00:21:23.664 "qid": 0, 00:21:23.664 "state": "enabled", 00:21:23.664 "listen_address": { 00:21:23.664 "trtype": "TCP", 00:21:23.664 "adrfam": "IPv4", 00:21:23.664 "traddr": "10.0.0.2", 00:21:23.664 "trsvcid": "4420" 00:21:23.664 }, 00:21:23.664 "peer_address": { 00:21:23.664 "trtype": "TCP", 00:21:23.664 "adrfam": "IPv4", 00:21:23.664 "traddr": "10.0.0.1", 00:21:23.664 "trsvcid": "43386" 00:21:23.664 }, 00:21:23.664 "auth": { 00:21:23.664 "state": "completed", 00:21:23.664 "digest": "sha512", 00:21:23.664 "dhgroup": "ffdhe2048" 00:21:23.664 } 00:21:23.664 } 00:21:23.664 ]' 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:23.664 16:42:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.664 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:23.922 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.922 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.922 16:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.181 16:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.114 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:25.373 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:25.630 00:21:25.630 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:25.630 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:25.630 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.888 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.888 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.888 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.888 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.888 16:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.888 16:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.888 { 00:21:25.888 "cntlid": 107, 00:21:25.888 "qid": 0, 00:21:25.888 "state": "enabled", 00:21:25.888 "listen_address": { 00:21:25.888 "trtype": "TCP", 00:21:25.888 "adrfam": "IPv4", 00:21:25.888 "traddr": "10.0.0.2", 00:21:25.888 "trsvcid": "4420" 00:21:25.888 }, 00:21:25.888 "peer_address": { 00:21:25.888 "trtype": "TCP", 00:21:25.888 "adrfam": "IPv4", 00:21:25.888 "traddr": "10.0.0.1", 00:21:25.888 "trsvcid": "56428" 00:21:25.888 }, 00:21:25.888 "auth": { 00:21:25.888 "state": "completed", 00:21:25.888 "digest": "sha512", 00:21:25.888 "dhgroup": "ffdhe2048" 00:21:25.888 } 00:21:25.888 } 00:21:25.888 ]' 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.888 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.145 16:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:21:27.078 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:27.078 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.078 16:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.078 16:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.078 16:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.078 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:27.079 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.079 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.645 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.902 00:21:27.902 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:27.902 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:27.902 16:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.159 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.159 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.159 16:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.159 16:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.159 16:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:21:28.159 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:28.159 { 00:21:28.159 "cntlid": 109, 00:21:28.159 "qid": 0, 00:21:28.159 "state": "enabled", 00:21:28.159 "listen_address": { 00:21:28.159 "trtype": "TCP", 00:21:28.159 "adrfam": "IPv4", 00:21:28.159 "traddr": "10.0.0.2", 00:21:28.159 "trsvcid": "4420" 00:21:28.160 }, 00:21:28.160 "peer_address": { 00:21:28.160 "trtype": "TCP", 00:21:28.160 "adrfam": "IPv4", 00:21:28.160 "traddr": "10.0.0.1", 00:21:28.160 "trsvcid": "56450" 00:21:28.160 }, 00:21:28.160 "auth": { 00:21:28.160 "state": "completed", 00:21:28.160 "digest": "sha512", 00:21:28.160 "dhgroup": "ffdhe2048" 00:21:28.160 } 00:21:28.160 } 00:21:28.160 ]' 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.160 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.417 16:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.350 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.608 16:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.173 00:21:30.173 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:30.173 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:30.173 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:30.431 { 00:21:30.431 "cntlid": 111, 00:21:30.431 "qid": 0, 00:21:30.431 "state": "enabled", 00:21:30.431 "listen_address": { 00:21:30.431 "trtype": "TCP", 00:21:30.431 "adrfam": "IPv4", 00:21:30.431 "traddr": "10.0.0.2", 00:21:30.431 "trsvcid": "4420" 00:21:30.431 }, 00:21:30.431 "peer_address": { 00:21:30.431 "trtype": "TCP", 00:21:30.431 "adrfam": "IPv4", 00:21:30.431 "traddr": "10.0.0.1", 00:21:30.431 "trsvcid": "56474" 00:21:30.431 }, 00:21:30.431 "auth": { 00:21:30.431 "state": "completed", 00:21:30.431 "digest": "sha512", 00:21:30.431 "dhgroup": "ffdhe2048" 00:21:30.431 } 00:21:30.431 } 00:21:30.431 ]' 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.431 16:42:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.431 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.690 16:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.623 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:31.881 16:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:32.138 00:21:32.138 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:32.138 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:32.138 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:32.396 { 00:21:32.396 "cntlid": 113, 00:21:32.396 "qid": 0, 00:21:32.396 "state": "enabled", 00:21:32.396 "listen_address": { 00:21:32.396 "trtype": "TCP", 00:21:32.396 "adrfam": "IPv4", 00:21:32.396 "traddr": "10.0.0.2", 00:21:32.396 "trsvcid": "4420" 00:21:32.396 }, 00:21:32.396 "peer_address": { 00:21:32.396 "trtype": "TCP", 00:21:32.396 "adrfam": "IPv4", 00:21:32.396 "traddr": "10.0.0.1", 00:21:32.396 "trsvcid": "56510" 00:21:32.396 }, 00:21:32.396 "auth": { 00:21:32.396 "state": "completed", 00:21:32.396 "digest": "sha512", 00:21:32.396 "dhgroup": "ffdhe3072" 00:21:32.396 } 00:21:32.396 } 00:21:32.396 ]' 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.396 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:32.685 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.685 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.685 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.685 16:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.059 16:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:34.059 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:34.316 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.575 16:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:34.833 { 00:21:34.833 "cntlid": 115, 00:21:34.833 "qid": 0, 00:21:34.833 "state": "enabled", 00:21:34.833 "listen_address": { 00:21:34.833 "trtype": "TCP", 00:21:34.833 "adrfam": "IPv4", 00:21:34.833 "traddr": "10.0.0.2", 00:21:34.833 "trsvcid": "4420" 00:21:34.833 }, 00:21:34.833 "peer_address": { 00:21:34.833 
"trtype": "TCP", 00:21:34.833 "adrfam": "IPv4", 00:21:34.833 "traddr": "10.0.0.1", 00:21:34.833 "trsvcid": "45468" 00:21:34.833 }, 00:21:34.833 "auth": { 00:21:34.833 "state": "completed", 00:21:34.833 "digest": "sha512", 00:21:34.833 "dhgroup": "ffdhe3072" 00:21:34.833 } 00:21:34.833 } 00:21:34.833 ]' 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.833 16:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.090 16:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.024 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.282 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.540 00:21:36.540 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:36.540 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:36.540 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:36.797 { 00:21:36.797 "cntlid": 117, 00:21:36.797 "qid": 0, 00:21:36.797 "state": "enabled", 00:21:36.797 "listen_address": { 00:21:36.797 "trtype": "TCP", 00:21:36.797 "adrfam": "IPv4", 00:21:36.797 "traddr": "10.0.0.2", 00:21:36.797 "trsvcid": "4420" 00:21:36.797 }, 00:21:36.797 "peer_address": { 00:21:36.797 "trtype": "TCP", 00:21:36.797 "adrfam": "IPv4", 00:21:36.797 "traddr": "10.0.0.1", 00:21:36.797 "trsvcid": "45488" 00:21:36.797 }, 00:21:36.797 "auth": { 00:21:36.797 "state": "completed", 00:21:36.797 "digest": "sha512", 00:21:36.797 "dhgroup": "ffdhe3072" 00:21:36.797 } 00:21:36.797 } 00:21:36.797 ]' 00:21:36.797 16:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:36.797 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.797 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:37.055 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.055 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:37.055 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.055 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.055 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.312 16:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:38.255 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.255 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.255 16:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.255 16:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.255 16:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.255 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:38.256 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.256 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.515 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.773 00:21:38.773 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:38.773 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:38.773 16:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.030 16:42:46 
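
Each key iteration in the trace above repeats one fixed sequence. Condensed into a runnable sketch, using the literal NQNs, addresses, and key names visible in this log (showing the target-side call against rpc.py's default socket is an assumption: in the trace it goes through the unexpanded rpc_cmd wrapper):

# Host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side: allow the host NQN on the subsystem, bound to DH-HMAC-CHAP key3.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3

# Host side: attach a controller; the fabric connect must now authenticate with key3.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

The qpair inspection that follows is what turns this from a connectivity test into an authentication test.
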
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:39.030 { 00:21:39.030 "cntlid": 119, 00:21:39.030 "qid": 0, 00:21:39.030 "state": "enabled", 00:21:39.030 "listen_address": { 00:21:39.030 "trtype": "TCP", 00:21:39.030 "adrfam": "IPv4", 00:21:39.030 "traddr": "10.0.0.2", 00:21:39.030 "trsvcid": "4420" 00:21:39.030 }, 00:21:39.030 "peer_address": { 00:21:39.030 "trtype": "TCP", 00:21:39.030 "adrfam": "IPv4", 00:21:39.030 "traddr": "10.0.0.1", 00:21:39.030 "trsvcid": "45518" 00:21:39.030 }, 00:21:39.030 "auth": { 00:21:39.030 "state": "completed", 00:21:39.030 "digest": "sha512", 00:21:39.030 "dhgroup": "ffdhe3072" 00:21:39.030 } 00:21:39.030 } 00:21:39.030 ]' 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.030 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.288 16:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.221 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:40.478 16:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.044 00:21:41.044 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:41.044 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:41.044 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:41.302 { 00:21:41.302 "cntlid": 121, 00:21:41.302 "qid": 0, 00:21:41.302 "state": "enabled", 00:21:41.302 "listen_address": { 00:21:41.302 "trtype": "TCP", 00:21:41.302 "adrfam": "IPv4", 00:21:41.302 "traddr": "10.0.0.2", 00:21:41.302 "trsvcid": "4420" 00:21:41.302 }, 00:21:41.302 "peer_address": { 00:21:41.302 "trtype": "TCP", 00:21:41.302 "adrfam": "IPv4", 00:21:41.302 "traddr": "10.0.0.1", 00:21:41.302 "trsvcid": "45534" 00:21:41.302 }, 00:21:41.302 "auth": { 00:21:41.302 "state": "completed", 00:21:41.302 "digest": "sha512", 00:21:41.302 "dhgroup": "ffdhe4096" 00:21:41.302 } 00:21:41.302 } 00:21:41.302 ]' 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:41.302 16:42:48 
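
The jq probe above and the [[ ... ]] tests that follow verify the negotiated parameters field by field against the qpair JSON. As one self-contained block (again assuming the target answers on rpc.py's default socket, where the trace uses rpc_cmd):

qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]  # negotiated digest
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # authentication finished

state == completed is the actual pass/fail signal; the digest and dhgroup checks confirm the qpair negotiated the combination under test rather than some other permitted one.
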
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.302 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.560 16:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.494 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:42.751 16:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:43.316 00:21:43.316 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:43.316 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.316 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:43.574 { 00:21:43.574 "cntlid": 123, 00:21:43.574 "qid": 0, 00:21:43.574 "state": "enabled", 00:21:43.574 "listen_address": { 00:21:43.574 "trtype": "TCP", 00:21:43.574 "adrfam": "IPv4", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "trsvcid": "4420" 00:21:43.574 }, 00:21:43.574 "peer_address": { 00:21:43.574 "trtype": "TCP", 00:21:43.574 "adrfam": "IPv4", 00:21:43.574 "traddr": "10.0.0.1", 00:21:43.574 "trsvcid": "45558" 00:21:43.574 }, 00:21:43.574 "auth": { 00:21:43.574 "state": "completed", 00:21:43.574 "digest": "sha512", 00:21:43.574 "dhgroup": "ffdhe4096" 00:21:43.574 } 00:21:43.574 } 00:21:43.574 ]' 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.574 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.831 16:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.763 16:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:45.021 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:45.587 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
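
Besides the SPDK host stack, every iteration also round-trips through the kernel initiator, passing the secret inline instead of by key name, then disconnects and removes the host so the next key can be installed cleanly. Verbatim from the trace (the two-digit field after DHHC-1: identifies, per the DH-HMAC-CHAP secret representation, the hash used for the transformed secret, 00 being untransformed; in this run it happens to coincide with the key index):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
  --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
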
00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:45.587 { 00:21:45.587 "cntlid": 125, 00:21:45.587 "qid": 0, 00:21:45.587 "state": "enabled", 00:21:45.587 "listen_address": { 00:21:45.587 "trtype": "TCP", 00:21:45.587 "adrfam": "IPv4", 00:21:45.587 "traddr": "10.0.0.2", 00:21:45.587 "trsvcid": "4420" 00:21:45.587 }, 00:21:45.587 "peer_address": { 00:21:45.587 "trtype": "TCP", 00:21:45.587 "adrfam": "IPv4", 00:21:45.587 "traddr": "10.0.0.1", 00:21:45.587 "trsvcid": "47712" 00:21:45.587 }, 00:21:45.587 "auth": { 00:21:45.587 "state": "completed", 00:21:45.587 "digest": "sha512", 00:21:45.587 "dhgroup": "ffdhe4096" 00:21:45.587 } 00:21:45.587 } 00:21:45.587 ]' 00:21:45.587 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.844 16:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.102 16:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.034 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.292 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.870 00:21:47.871 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:47.871 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:47.871 16:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:48.177 { 00:21:48.177 "cntlid": 127, 00:21:48.177 "qid": 0, 00:21:48.177 "state": "enabled", 00:21:48.177 "listen_address": { 00:21:48.177 "trtype": "TCP", 00:21:48.177 "adrfam": "IPv4", 00:21:48.177 "traddr": "10.0.0.2", 00:21:48.177 "trsvcid": "4420" 00:21:48.177 }, 00:21:48.177 "peer_address": { 00:21:48.177 "trtype": "TCP", 00:21:48.177 "adrfam": "IPv4", 00:21:48.177 "traddr": "10.0.0.1", 00:21:48.177 "trsvcid": "47724" 00:21:48.177 }, 00:21:48.177 "auth": { 00:21:48.177 "state": "completed", 00:21:48.177 "digest": "sha512", 00:21:48.177 "dhgroup": "ffdhe4096" 00:21:48.177 } 00:21:48.177 } 00:21:48.177 ]' 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.177 16:42:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.177 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.434 16:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.365 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:49.622 16:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:50.186 00:21:50.186 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:50.186 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:50.186 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:50.443 { 00:21:50.443 "cntlid": 129, 00:21:50.443 "qid": 0, 00:21:50.443 "state": "enabled", 00:21:50.443 "listen_address": { 00:21:50.443 "trtype": "TCP", 00:21:50.443 "adrfam": "IPv4", 00:21:50.443 "traddr": "10.0.0.2", 00:21:50.443 "trsvcid": "4420" 00:21:50.443 }, 00:21:50.443 "peer_address": { 00:21:50.443 "trtype": "TCP", 00:21:50.443 "adrfam": "IPv4", 00:21:50.443 "traddr": "10.0.0.1", 00:21:50.443 "trsvcid": "47758" 00:21:50.443 }, 00:21:50.443 "auth": { 00:21:50.443 "state": "completed", 00:21:50.443 "digest": "sha512", 00:21:50.443 "dhgroup": "ffdhe6144" 00:21:50.443 } 00:21:50.443 } 00:21:50.443 ]' 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.443 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.907 16:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.836 16:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:52.094 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:52.658 00:21:52.658 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:52.658 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:52.658 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:52.916 { 00:21:52.916 "cntlid": 131, 00:21:52.916 "qid": 0, 00:21:52.916 "state": "enabled", 00:21:52.916 "listen_address": { 00:21:52.916 "trtype": "TCP", 00:21:52.916 "adrfam": "IPv4", 00:21:52.916 "traddr": "10.0.0.2", 00:21:52.916 "trsvcid": "4420" 00:21:52.916 }, 00:21:52.916 "peer_address": { 00:21:52.916 
"trtype": "TCP", 00:21:52.916 "adrfam": "IPv4", 00:21:52.916 "traddr": "10.0.0.1", 00:21:52.916 "trsvcid": "47788" 00:21:52.916 }, 00:21:52.916 "auth": { 00:21:52.916 "state": "completed", 00:21:52.916 "digest": "sha512", 00:21:52.916 "dhgroup": "ffdhe6144" 00:21:52.916 } 00:21:52.916 } 00:21:52.916 ]' 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.916 16:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:52.916 16:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.916 16:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:52.916 16:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.916 16:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.916 16:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.174 16:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.107 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:54.365 16:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:54.930 00:21:54.930 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:54.930 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:54.930 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:55.188 { 00:21:55.188 "cntlid": 133, 00:21:55.188 "qid": 0, 00:21:55.188 "state": "enabled", 00:21:55.188 "listen_address": { 00:21:55.188 "trtype": "TCP", 00:21:55.188 "adrfam": "IPv4", 00:21:55.188 "traddr": "10.0.0.2", 00:21:55.188 "trsvcid": "4420" 00:21:55.188 }, 00:21:55.188 "peer_address": { 00:21:55.188 "trtype": "TCP", 00:21:55.188 "adrfam": "IPv4", 00:21:55.188 "traddr": "10.0.0.1", 00:21:55.188 "trsvcid": "42820" 00:21:55.188 }, 00:21:55.188 "auth": { 00:21:55.188 "state": "completed", 00:21:55.188 "digest": "sha512", 00:21:55.188 "dhgroup": "ffdhe6144" 00:21:55.188 } 00:21:55.188 } 00:21:55.188 ]' 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.188 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:55.446 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.446 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.446 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.703 16:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.632 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.889 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.890 16:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.454 00:21:57.454 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:57.454 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:57.454 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.711 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.711 16:43:04 
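
Stepping back, this whole section is one doubly nested loop: DH groups outside (the 'for dhgroup in "${dhgroups[@]}"' trace lines at target/auth.sh@85), key ids inside (auth.sh@86). A reconstruction of the driver under stated assumptions, since only ffdhe3072 through ffdhe8192 and key indices 0-3 are visible in this excerpt:

# Reconstructed from the trace; the array contents are assumptions, not quotes from auth.sh.
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)   # key material is registered earlier; only the indices matter here
for dhgroup in "${dhgroups[@]}"; do                        # target/auth.sh@85
  for keyid in "${!keys[@]}"; do                           # target/auth.sh@86
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
      --dhchap-dhgroups "$dhgroup"                         # target/auth.sh@87
    connect_authenticate sha512 "$dhgroup" "$keyid"        # target/auth.sh@89, traced above
  done
done
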
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:57.712 { 00:21:57.712 "cntlid": 135, 00:21:57.712 "qid": 0, 00:21:57.712 "state": "enabled", 00:21:57.712 "listen_address": { 00:21:57.712 "trtype": "TCP", 00:21:57.712 "adrfam": "IPv4", 00:21:57.712 "traddr": "10.0.0.2", 00:21:57.712 "trsvcid": "4420" 00:21:57.712 }, 00:21:57.712 "peer_address": { 00:21:57.712 "trtype": "TCP", 00:21:57.712 "adrfam": "IPv4", 00:21:57.712 "traddr": "10.0.0.1", 00:21:57.712 "trsvcid": "42834" 00:21:57.712 }, 00:21:57.712 "auth": { 00:21:57.712 "state": "completed", 00:21:57.712 "digest": "sha512", 00:21:57.712 "dhgroup": "ffdhe6144" 00:21:57.712 } 00:21:57.712 } 00:21:57.712 ]' 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.712 16:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.969 16:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:58.902 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:59.159 16:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:00.091 00:22:00.091 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:00.091 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.091 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:00.348 { 00:22:00.348 "cntlid": 137, 00:22:00.348 "qid": 0, 00:22:00.348 "state": "enabled", 00:22:00.348 "listen_address": { 00:22:00.348 "trtype": "TCP", 00:22:00.348 "adrfam": "IPv4", 00:22:00.348 "traddr": "10.0.0.2", 00:22:00.348 "trsvcid": "4420" 00:22:00.348 }, 00:22:00.348 "peer_address": { 00:22:00.348 "trtype": "TCP", 00:22:00.348 "adrfam": "IPv4", 00:22:00.348 "traddr": "10.0.0.1", 00:22:00.348 "trsvcid": "42876" 00:22:00.348 }, 00:22:00.348 "auth": { 00:22:00.348 "state": "completed", 00:22:00.348 "digest": "sha512", 00:22:00.348 "dhgroup": "ffdhe8192" 00:22:00.348 } 00:22:00.348 } 00:22:00.348 ]' 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:00.348 16:43:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.348 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:00.605 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.605 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.605 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.863 16:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.794 16:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.051 16:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.052 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:02.052 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:03.003 00:22:03.004 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:03.004 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:03.004 16:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:03.004 { 00:22:03.004 "cntlid": 139, 00:22:03.004 "qid": 0, 00:22:03.004 "state": "enabled", 00:22:03.004 "listen_address": { 00:22:03.004 "trtype": "TCP", 00:22:03.004 "adrfam": "IPv4", 00:22:03.004 "traddr": "10.0.0.2", 00:22:03.004 "trsvcid": "4420" 00:22:03.004 }, 00:22:03.004 "peer_address": { 00:22:03.004 "trtype": "TCP", 00:22:03.004 "adrfam": "IPv4", 00:22:03.004 "traddr": "10.0.0.1", 00:22:03.004 "trsvcid": "42896" 00:22:03.004 }, 00:22:03.004 "auth": { 00:22:03.004 "state": "completed", 00:22:03.004 "digest": "sha512", 00:22:03.004 "dhgroup": "ffdhe8192" 00:22:03.004 } 00:22:03.004 } 00:22:03.004 ]' 00:22:03.004 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.290 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.558 16:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OWMxNjRkYTYxZTUwM2RkNjA4NjZlZjIzMDM0NTcxMmTQFMff: 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.491 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.748 16:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.749 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:04.749 16:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.680 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.680 16:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.681 16:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
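The cycle traced above repeats for each digest/DH-group/key combination: the target registers the host NQN on the subsystem with one DH-HMAC-CHAP key, the host attaches a controller presenting the same key, the test reads the resulting qpair's auth state, then tears the pairing down for the next iteration. A minimal sketch of one iteration, using only the RPCs visible in this log (the NQNs, addresses, and the host-side RPC socket path are the ones from this run; the target-side rpc_cmd calls presumably go to the default /var/tmp/spdk.sock, and key0..key3 are key names registered earlier in the test, outside this excerpt):

  #!/usr/bin/env bash
  # One connect_authenticate iteration, reconstructed from the trace above.
  RPC=scripts/rpc.py               # SPDK RPC client, path assumed relative to an spdk checkout
  HOST_SOCK=/var/tmp/host.sock     # host-side RPC socket, as used by hostrpc in this log
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # Host: restrict negotiation to one digest and one DH group.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target: allow this host on the subsystem with a specific DH-HMAC-CHAP key.
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2

  # Host: attach a controller, authenticating with the matching key.
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2

  # Target: the qpair should now report auth state "completed".
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'

  # Teardown before the next digest/dhgroup/key combination.
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The kernel-initiator leg of each iteration (nvme connect ... --dhchap-secret DHHC-1:xx:...= followed by nvme disconnect) exercises the same key through the Linux host stack rather than the SPDK host RPC path, which is why both appear back to back in the trace.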
00:22:05.681 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:05.681 { 00:22:05.681 "cntlid": 141, 00:22:05.681 "qid": 0, 00:22:05.681 "state": "enabled", 00:22:05.681 "listen_address": { 00:22:05.681 "trtype": "TCP", 00:22:05.681 "adrfam": "IPv4", 00:22:05.681 "traddr": "10.0.0.2", 00:22:05.681 "trsvcid": "4420" 00:22:05.681 }, 00:22:05.681 "peer_address": { 00:22:05.681 "trtype": "TCP", 00:22:05.681 "adrfam": "IPv4", 00:22:05.681 "traddr": "10.0.0.1", 00:22:05.681 "trsvcid": "51092" 00:22:05.681 }, 00:22:05.681 "auth": { 00:22:05.681 "state": "completed", 00:22:05.681 "digest": "sha512", 00:22:05.681 "dhgroup": "ffdhe8192" 00:22:05.681 } 00:22:05.681 } 00:22:05.681 ]' 00:22:05.681 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:05.681 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.681 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:05.938 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.938 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:05.938 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.938 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.938 16:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.195 16:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NjhkNDgyYWUzOTNmMTNhYWI2ZDc3YWMzZjlkMDMyOTUyZDQ5YTZkNGMzZjkxYTI04QQXNw==: 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.127 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.385 16:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.316 00:22:08.316 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:08.316 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:08.316 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:08.574 { 00:22:08.574 "cntlid": 143, 00:22:08.574 "qid": 0, 00:22:08.574 "state": "enabled", 00:22:08.574 "listen_address": { 00:22:08.574 "trtype": "TCP", 00:22:08.574 "adrfam": "IPv4", 00:22:08.574 "traddr": "10.0.0.2", 00:22:08.574 "trsvcid": "4420" 00:22:08.574 }, 00:22:08.574 "peer_address": { 00:22:08.574 "trtype": "TCP", 00:22:08.574 "adrfam": "IPv4", 00:22:08.574 "traddr": "10.0.0.1", 00:22:08.574 "trsvcid": "51110" 00:22:08.574 }, 00:22:08.574 "auth": { 00:22:08.574 "state": "completed", 00:22:08.574 "digest": "sha512", 00:22:08.574 "dhgroup": "ffdhe8192" 00:22:08.574 } 00:22:08.574 } 00:22:08.574 ]' 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.574 16:43:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.574 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.835 16:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:Y2VjMTczZTQzNzMxN2U5NzIxZDY2ZWViNjEwYzkxYzI2NzFiY2RlYTVmYzU2NjA1YzgzMDY1MzM2MjIzM2FjZEu7iDU=: 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.768 16:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.026 16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:10.026 
16:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:10.958 00:22:10.958 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:10.958 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.958 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:11.216 { 00:22:11.216 "cntlid": 145, 00:22:11.216 "qid": 0, 00:22:11.216 "state": "enabled", 00:22:11.216 "listen_address": { 00:22:11.216 "trtype": "TCP", 00:22:11.216 "adrfam": "IPv4", 00:22:11.216 "traddr": "10.0.0.2", 00:22:11.216 "trsvcid": "4420" 00:22:11.216 }, 00:22:11.216 "peer_address": { 00:22:11.216 "trtype": "TCP", 00:22:11.216 "adrfam": "IPv4", 00:22:11.216 "traddr": "10.0.0.1", 00:22:11.216 "trsvcid": "51140" 00:22:11.216 }, 00:22:11.216 "auth": { 00:22:11.216 "state": "completed", 00:22:11.216 "digest": "sha512", 00:22:11.216 "dhgroup": "ffdhe8192" 00:22:11.216 } 00:22:11.216 } 00:22:11.216 ]' 00:22:11.216 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.474 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.731 16:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjdlODc0MDMzMzUyOWQ5ZTk4MjhjYzkxNTZlZmUyOTVmODRlOWNiNzQ5M2M1MGM2kpOIlg==: 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:12.663 16:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:13.596 request: 00:22:13.596 { 00:22:13.596 "name": "nvme0", 00:22:13.596 "trtype": "tcp", 00:22:13.596 "traddr": "10.0.0.2", 00:22:13.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:13.596 "adrfam": "ipv4", 00:22:13.596 "trsvcid": "4420", 00:22:13.596 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.596 "dhchap_key": "key2", 00:22:13.596 "method": "bdev_nvme_attach_controller", 00:22:13.596 "req_id": 1 00:22:13.596 } 00:22:13.596 Got JSON-RPC error response 00:22:13.596 response: 00:22:13.596 { 00:22:13.596 "code": -32602, 00:22:13.596 "message": "Invalid parameters" 00:22:13.596 } 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.596 16:43:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1785733 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1785733 ']' 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1785733 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1785733 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1785733' 00:22:13.596 killing process with pid 1785733 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1785733 00:22:13.596 16:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1785733 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.854 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.111 rmmod nvme_tcp 00:22:14.111 rmmod nvme_fabrics 00:22:14.111 rmmod nvme_keyring 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1785700 ']' 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1785700 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1785700 ']' 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1785700 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:14.111 
16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1785700 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1785700' 00:22:14.111 killing process with pid 1785700 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1785700 00:22:14.111 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1785700 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.367 16:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.320 16:43:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.320 16:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vzx /tmp/spdk.key-sha256.kAF /tmp/spdk.key-sha384.NYz /tmp/spdk.key-sha512.D67 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:16.320 00:22:16.320 real 2m58.035s 00:22:16.320 user 6m53.422s 00:22:16.320 sys 0m21.191s 00:22:16.320 16:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:16.320 16:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.320 ************************************ 00:22:16.320 END TEST nvmf_auth_target 00:22:16.320 ************************************ 00:22:16.320 16:43:23 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:16.320 16:43:23 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:16.320 16:43:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:16.320 16:43:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:16.320 16:43:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.320 ************************************ 00:22:16.320 START TEST nvmf_bdevio_no_huge 00:22:16.320 ************************************ 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:16.320 * Looking for test storage... 
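With the auth target test closed out above (2m58s wall time), the run moves on to nvmf_bdevio_no_huge: the same bdevio I/O suite, but with the target application started without hugepages, as the --no-hugepages flag in the bdevio.sh invocation above indicates. The EAL arguments that implement this appear further down in the trace; the distinguishing part of the app start amounts to (a sketch, with the binary path as used in this workspace):

  # Target started with plain non-hugepage memory, capped at 1024 MB:
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78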
00:22:16.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.320 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.577 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.578 16:43:23 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.578 16:43:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:19.106 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:19.106 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.106 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:19.107 Found net devices under 0000:09:00.0: cvl_0_0 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.107 16:43:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:19.107 Found net devices under 0000:09:00.1: cvl_0_1 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.107 16:43:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:22:19.107 00:22:19.107 --- 10.0.0.2 ping statistics --- 00:22:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.107 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:19.107 00:22:19.107 --- 10.0.0.1 ping statistics --- 00:22:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.107 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1809753 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1809753 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1809753 ']' 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
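Annotation: the nvmf_tcp_init block above builds the loopback topology for this run. The two ice ports discovered earlier are split across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the target at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, with an iptables rule admitting the NVMe/TCP port and a ping in each direction as a sanity check. A minimal stand-alone sketch of the same setup, using the interface names this run discovered (substitute your own):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from a clean slate
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The target application is then launched inside the namespace: NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array, which is why the nvmf_tgt above runs under 'ip netns exec cvl_0_0_ns_spdk' with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78.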
00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:19.107 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.107 [2024-05-15 16:43:26.201516] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:22:19.107 [2024-05-15 16:43:26.201615] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:19.107 [2024-05-15 16:43:26.279870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.365 [2024-05-15 16:43:26.362626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.365 [2024-05-15 16:43:26.362684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.365 [2024-05-15 16:43:26.362697] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.365 [2024-05-15 16:43:26.362709] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.365 [2024-05-15 16:43:26.362719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.365 [2024-05-15 16:43:26.362839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.365 [2024-05-15 16:43:26.362902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:19.365 [2024-05-15 16:43:26.362968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:19.365 [2024-05-15 16:43:26.362970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.365 [2024-05-15 16:43:26.483706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.365 Malloc0 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.365 [2024-05-15 16:43:26.521441] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:19.365 [2024-05-15 16:43:26.521719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:19.365 { 00:22:19.365 "params": { 00:22:19.365 "name": "Nvme$subsystem", 00:22:19.365 "trtype": "$TEST_TRANSPORT", 00:22:19.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.365 "adrfam": "ipv4", 00:22:19.365 "trsvcid": "$NVMF_PORT", 00:22:19.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.365 "hdgst": ${hdgst:-false}, 00:22:19.365 "ddgst": ${ddgst:-false} 00:22:19.365 }, 00:22:19.365 "method": "bdev_nvme_attach_controller" 00:22:19.365 } 00:22:19.365 EOF 00:22:19.365 )") 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
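Annotation: once nvmf_tgt is listening on /var/tmp/spdk.sock, target/bdevio.sh provisions it entirely over JSON-RPC: a TCP transport, a 64 MiB Malloc0 ramdisk, subsystem nqn.2016-06.io.spdk:cnode1, the namespace attach, and a listener on 10.0.0.2:4420. It then launches bdevio with a generated initiator config fed through process substitution; bash exposes the pipe as /dev/fd/<n>, which is where the --json /dev/fd/62 argument comes from. The same sequence against a stand-alone target, as a sketch:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio consumes the gen_nvmf_target_json output (nvmf/common.sh helper seen above)
  # via process substitution; the fd number is whatever bash hands out, 62 in this run:
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024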
00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:19.365 16:43:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:19.365 "params": { 00:22:19.365 "name": "Nvme1", 00:22:19.365 "trtype": "tcp", 00:22:19.365 "traddr": "10.0.0.2", 00:22:19.365 "adrfam": "ipv4", 00:22:19.365 "trsvcid": "4420", 00:22:19.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.365 "hdgst": false, 00:22:19.365 "ddgst": false 00:22:19.365 }, 00:22:19.365 "method": "bdev_nvme_attach_controller" 00:22:19.365 }' 00:22:19.365 [2024-05-15 16:43:26.562074] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:22:19.365 [2024-05-15 16:43:26.562167] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1809781 ] 00:22:19.623 [2024-05-15 16:43:26.631954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:19.623 [2024-05-15 16:43:26.715708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.623 [2024-05-15 16:43:26.715769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.623 [2024-05-15 16:43:26.715773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.887 I/O targets: 00:22:19.887 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:19.887 00:22:19.887 00:22:19.887 CUnit - A unit testing framework for C - Version 2.1-3 00:22:19.887 http://cunit.sourceforge.net/ 00:22:19.887 00:22:19.887 00:22:19.887 Suite: bdevio tests on: Nvme1n1 00:22:19.887 Test: blockdev write read block ...passed 00:22:19.887 Test: blockdev write zeroes read block ...passed 00:22:19.887 Test: blockdev write zeroes read no split ...passed 00:22:19.887 Test: blockdev write zeroes read split ...passed 00:22:19.887 Test: blockdev write zeroes read split partial ...passed 00:22:19.887 Test: blockdev reset ...[2024-05-15 16:43:26.991224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.887 [2024-05-15 16:43:26.991326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6d160 (9): Bad file descriptor 00:22:19.887 [2024-05-15 16:43:27.005746] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
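Annotation: the printf above is the JSON bdevio consumed, a single bdev_nvme_attach_controller call against 10.0.0.2:4420 with header and data digests disabled (hdgst/ddgst false). In the reset test that follows, the 'Failed to flush tqpair ... (9): Bad file descriptor' line is the expected side effect of nvme_ctrlr_disconnect deliberately closing the socket, and the NOTICE after it confirms the reconnect succeeded. A hedged sketch of the equivalent manual attach over rpc.py (with RPC as in the sketch above), mirroring the flags bdev_nvme_attach_controller is invoked with later in this same log:

  $RPC bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1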
00:22:19.887 passed 00:22:19.887 Test: blockdev write read 8 blocks ...passed 00:22:19.888 Test: blockdev write read size > 128k ...passed 00:22:19.888 Test: blockdev write read invalid size ...passed 00:22:19.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:19.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:19.888 Test: blockdev write read max offset ...passed 00:22:20.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:20.173 Test: blockdev writev readv 8 blocks ...passed 00:22:20.173 Test: blockdev writev readv 30 x 1block ...passed 00:22:20.173 Test: blockdev writev readv block ...passed 00:22:20.173 Test: blockdev writev readv size > 128k ...passed 00:22:20.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:20.173 Test: blockdev comparev and writev ...[2024-05-15 16:43:27.218953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.218991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.219016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.219033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.219439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.219464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.219486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.219503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.219855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.219881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.219904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.219921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.220287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.220312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.220335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.173 [2024-05-15 16:43:27.220352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:20.173 passed 00:22:20.173 Test: blockdev nvme passthru rw ...passed 00:22:20.173 Test: blockdev nvme passthru vendor specific ...[2024-05-15 16:43:27.302546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.173 [2024-05-15 16:43:27.302574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.302749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.173 [2024-05-15 16:43:27.302773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.302947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.173 [2024-05-15 16:43:27.302970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:20.173 [2024-05-15 16:43:27.303144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.173 [2024-05-15 16:43:27.303173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:20.173 passed 00:22:20.173 Test: blockdev nvme admin passthru ...passed 00:22:20.173 Test: blockdev copy ...passed 00:22:20.173 00:22:20.173 Run Summary: Type Total Ran Passed Failed Inactive 00:22:20.173 suites 1 1 n/a 0 0 00:22:20.173 tests 23 23 23 0 0 00:22:20.173 asserts 152 152 152 0 n/a 00:22:20.173 00:22:20.173 Elapsed time = 0.991 seconds 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.737 rmmod nvme_tcp 00:22:20.737 rmmod nvme_fabrics 00:22:20.737 rmmod nvme_keyring 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.737 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1809753 ']' 00:22:20.738 16:43:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1809753 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1809753 ']' 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1809753 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1809753 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1809753' 00:22:20.738 killing process with pid 1809753 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1809753 00:22:20.738 [2024-05-15 16:43:27.764310] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:20.738 16:43:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1809753 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.995 16:43:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.521 16:43:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.521 00:22:23.521 real 0m6.688s 00:22:23.521 user 0m9.421s 00:22:23.521 sys 0m2.795s 00:22:23.521 16:43:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:23.521 16:43:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.521 ************************************ 00:22:23.521 END TEST nvmf_bdevio_no_huge 00:22:23.521 ************************************ 00:22:23.521 16:43:30 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:23.521 16:43:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:23.521 16:43:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:23.521 16:43:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.521 ************************************ 00:22:23.521 START TEST nvmf_tls 00:22:23.521 ************************************ 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
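Annotation: teardown mirrors setup. nvmfcleanup unloads the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), killprocess stops nvmf_tgt (pid 1809753 in this run) and waits for it, and nvmftestfini removes the namespace and flushes the leftover address from cvl_0_1. A rough by-hand equivalent, assuming _remove_spdk_ns amounts to deleting the namespace (its body is not shown in this log):

  modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics and nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid" # wait only works if the target is our child
  ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns performs
  ip -4 addr flush cvl_0_1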
00:22:23.521 * Looking for test storage... 00:22:23.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
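Annotation: tls.sh starts by re-sourcing nvmf/common.sh, so the TLS suite inherits the same constants: ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, and a per-run host identity in which NVME_HOSTID is the uuid portion of the freshly generated NVME_HOSTNQN (29f67375-a902-e411-ace9-001e67bc3c9a above). The NVME_HOST array packages both flags so any kernel-initiator step can splice them into $NVME_CONNECT; as an illustration of how those pieces compose (this particular connect is not executed in this suite):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # reuse the uuid, as this run does
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"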
00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.521 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.522 16:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.047 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:26.048 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.048 
16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:26.048 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:26.048 Found net devices under 0000:09:00.0: cvl_0_0 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:26.048 Found net devices under 0000:09:00.1: cvl_0_1 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.048 
16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:22:26.048 00:22:26.048 --- 10.0.0.2 ping statistics --- 00:22:26.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.048 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:26.048 00:22:26.048 --- 10.0.0.1 ping statistics --- 00:22:26.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.048 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1812256 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1812256 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1812256 ']' 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.048 16:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.048 [2024-05-15 16:43:32.975498] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:22:26.048 [2024-05-15 16:43:32.975598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.048 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.048 [2024-05-15 16:43:33.056311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.048 [2024-05-15 16:43:33.141677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.048 [2024-05-15 16:43:33.141741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:26.048 [2024-05-15 16:43:33.141768] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.048 [2024-05-15 16:43:33.141783] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.048 [2024-05-15 16:43:33.141796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.048 [2024-05-15 16:43:33.141839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:26.048 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:26.305 true 00:22:26.305 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.305 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:26.563 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:26.563 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:26.563 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:26.820 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.820 16:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:27.078 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:27.078 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:27.078 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:27.336 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.336 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:27.593 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:27.593 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:27.593 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.593 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:27.851 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:27.851 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:27.851 16:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:28.109 16:43:35 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.109 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:28.366 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:28.366 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:28.366 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:28.624 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.624 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:28.881 16:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.H4vn06hLG5 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.tv9o6ZveVB 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.H4vn06hLG5 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tv9o6ZveVB 00:22:28.881 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:29.139 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:29.705 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.H4vn06hLG5 00:22:29.705 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.H4vn06hLG5 00:22:29.705 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.705 [2024-05-15 16:43:36.895868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.705 16:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.964 16:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.221 [2024-05-15 16:43:37.413257] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:30.221 [2024-05-15 16:43:37.413370] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.221 [2024-05-15 16:43:37.413635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.221 16:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.784 malloc0 00:22:30.784 16:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:31.042 16:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H4vn06hLG5 00:22:31.042 [2024-05-15 16:43:38.247973] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:31.042 16:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.H4vn06hLG5 00:22:31.300 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.264 Initializing NVMe Controllers 00:22:41.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.264 Initialization complete. Launching workers. 
00:22:41.264 ======================================================== 00:22:41.264 Latency(us) 00:22:41.264 Device Information : IOPS MiB/s Average min max 00:22:41.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7761.29 30.32 8248.78 1291.25 9248.86 00:22:41.264 ======================================================== 00:22:41.264 Total : 7761.29 30.32 8248.78 1291.25 9248.86 00:22:41.264 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4vn06hLG5 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H4vn06hLG5' 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1814027 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1814027 /var/tmp/bdevperf.sock 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1814027 ']' 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:41.264 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.264 [2024-05-15 16:43:48.403071] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:22:41.264 [2024-05-15 16:43:48.403146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814027 ]
00:22:41.265 EAL: No free 2048 kB hugepages reported on node 1
00:22:41.265 [2024-05-15 16:43:48.469089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:41.522 [2024-05-15 16:43:48.550995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:41.522 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:41.523 16:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:22:41.523 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H4vn06hLG5
00:22:41.782 [2024-05-15 16:43:48.886844] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:41.782 [2024-05-15 16:43:48.886951] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:22:41.782 TLSTESTn1
00:22:41.782 16:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:42.039 Running I/O for 10 seconds...
00:22:52.081
00:22:52.081 Latency(us)
00:22:52.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.081 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:52.081 Verification LBA range: start 0x0 length 0x2000
00:22:52.081 TLSTESTn1 : 10.03 3572.51 13.96 0.00 0.00 35750.67 6043.88 53205.52
00:22:52.081 ===================================================================================================================
00:22:52.081 Total : 3572.51 13.96 0.00 0.00 35750.67 6043.88 53205.52
00:22:52.081 0
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1814027
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1814027 ']'
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1814027
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1814027
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1814027'
killing process with pid 1814027
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1814027
Received shutdown signal, test time was about 10.000000 seconds
00:22:52.081
00:22:52.081 Latency(us)
00:22:52.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.081 ===================================================================================================================
00:22:52.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:52.081 [2024-05-15 16:43:59.201398] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:52.081 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1814027
00:22:52.338 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tv9o6ZveVB
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tv9o6ZveVB
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tv9o6ZveVB
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tv9o6ZveVB'
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1815341
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1815341 /var/tmp/bdevperf.sock
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1815341 ']'
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:52.339 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:52.339 [2024-05-15 16:43:59.473458] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
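From tls.sh@146 onward the suite switches to negative testing: each run_bdevperf call is wrapped in NOT, whose xtrace fragments (@648, @650, @651, @659, @675) recur throughout the lines below. A condensed sketch of the wrapper as it reads from this trace; the signal/core-file handling behind the (( es > 128 )) check is deliberately elided here:

NOT() {
    local es=0
    valid_exec_arg "$@" || return 1  # @650: refuse to wrap something that is not runnable
    "$@" || es=$?                    # @651: run the command, capturing its exit status
    # @659/@670: statuses above 128 (signals) and an allow-list get special-cased
    (( !es == 0 ))                   # @675: succeed only if the wrapped command failed
}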
00:22:52.339 [2024-05-15 16:43:59.473548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815341 ] 00:22:52.339 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.339 [2024-05-15 16:43:59.543242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.597 [2024-05-15 16:43:59.622313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.597 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.597 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:52.597 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tv9o6ZveVB 00:22:52.855 [2024-05-15 16:43:59.937896] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.855 [2024-05-15 16:43:59.938014] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:52.855 [2024-05-15 16:43:59.943347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.855 [2024-05-15 16:43:59.943823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9700 (107): Transport endpoint is not connected 00:22:52.855 [2024-05-15 16:43:59.944812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9700 (9): Bad file descriptor 00:22:52.855 [2024-05-15 16:43:59.945811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.855 [2024-05-15 16:43:59.945832] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.855 [2024-05-15 16:43:59.945859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
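When the attach fails, rpc.py prints the request it sent and the error it received, reproduced next. On the wire this is ordinary JSON-RPC 2.0 over the application's Unix domain socket, so the same socket can be probed by hand; a minimal sketch (socat being assumed installed, and bdev_get_bdevs picked only as a harmless method to demonstrate with):

echo '{"jsonrpc": "2.0", "id": 1, "method": "bdev_get_bdevs", "params": {}}' | socat - UNIX-CONNECT:/var/tmp/bdevperf.sock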
00:22:52.855 request:
00:22:52.855 {
00:22:52.855 "name": "TLSTEST",
00:22:52.855 "trtype": "tcp",
00:22:52.855 "traddr": "10.0.0.2",
00:22:52.855 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:52.855 "adrfam": "ipv4",
00:22:52.855 "trsvcid": "4420",
00:22:52.855 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:52.855 "psk": "/tmp/tmp.tv9o6ZveVB",
00:22:52.855 "method": "bdev_nvme_attach_controller",
00:22:52.855 "req_id": 1
00:22:52.855 }
00:22:52.855 Got JSON-RPC error response
00:22:52.855 response:
00:22:52.855 {
00:22:52.855 "code": -32602,
00:22:52.855 "message": "Invalid parameters"
00:22:52.855 }
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1815341
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1815341 ']'
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1815341
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1815341
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1815341'
killing process with pid 1815341
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1815341
Received shutdown signal, test time was about 10.000000 seconds
00:22:52.855
00:22:52.855 Latency(us)
00:22:52.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.855 ===================================================================================================================
00:22:52.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:52.855 [2024-05-15 16:43:59.992159] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:52.855 16:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1815341
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H4vn06hLG5
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H4vn06hLG5
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H4vn06hLG5
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H4vn06hLG5'
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1815478
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1815478 /var/tmp/bdevperf.sock
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1815478 ']'
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:53.113 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:53.113 [2024-05-15 16:44:00.239771] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:22:53.113 [2024-05-15 16:44:00.239872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815478 ] 00:22:53.113 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.113 [2024-05-15 16:44:00.311101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.371 [2024-05-15 16:44:00.399885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.371 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.371 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.371 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.H4vn06hLG5 00:22:53.629 [2024-05-15 16:44:00.732852] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.629 [2024-05-15 16:44:00.732958] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.629 [2024-05-15 16:44:00.743994] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.629 [2024-05-15 16:44:00.744028] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.629 [2024-05-15 16:44:00.744082] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.629 [2024-05-15 16:44:00.744795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341700 (107): Transport endpoint is not connected 00:22:53.629 [2024-05-15 16:44:00.745787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341700 (9): Bad file descriptor 00:22:53.629 [2024-05-15 16:44:00.746786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.629 [2024-05-15 16:44:00.746806] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.629 [2024-05-15 16:44:00.746822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
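The tcp_sock_get_key/posix_sock_psk_find_session_server_cb errors just above explain this failure mode: during the handshake the target looks up a PSK under an identity derived from both NQNs, and host2 was never registered against cnode1, so no key is found and the connection is torn down before the JSON-RPC dump that follows. A hedged reconstruction of the identity string being searched (the field breakdown is our reading of the NVMe/TCP PSK identity layout, not something this log states):

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
identity="NVMe0R01 ${hostnqn} ${subnqn}"  # "NVMe" + format/retained/hash fields, then the two NQNs
echo "$identity"  # matches the identity printed in the errors above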
00:22:53.629 request: 00:22:53.629 { 00:22:53.629 "name": "TLSTEST", 00:22:53.629 "trtype": "tcp", 00:22:53.629 "traddr": "10.0.0.2", 00:22:53.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.629 "adrfam": "ipv4", 00:22:53.629 "trsvcid": "4420", 00:22:53.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.629 "psk": "/tmp/tmp.H4vn06hLG5", 00:22:53.629 "method": "bdev_nvme_attach_controller", 00:22:53.629 "req_id": 1 00:22:53.629 } 00:22:53.629 Got JSON-RPC error response 00:22:53.629 response: 00:22:53.629 { 00:22:53.629 "code": -32602, 00:22:53.629 "message": "Invalid parameters" 00:22:53.629 } 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1815478 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1815478 ']' 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1815478 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1815478 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1815478' 00:22:53.629 killing process with pid 1815478 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1815478 00:22:53.629 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.629 00:22:53.629 Latency(us) 00:22:53.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.629 =================================================================================================================== 00:22:53.629 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.629 [2024-05-15 16:44:00.795795] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.629 16:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1815478 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4vn06hLG5 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4vn06hLG5 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4vn06hLG5 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H4vn06hLG5' 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1815499 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1815499 /var/tmp/bdevperf.sock 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1815499 ']' 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.888 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.888 [2024-05-15 16:44:01.060888] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:22:53.888 [2024-05-15 16:44:01.060963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815499 ] 00:22:53.888 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.146 [2024-05-15 16:44:01.132624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.146 [2024-05-15 16:44:01.212902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.146 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.146 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:54.146 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H4vn06hLG5 00:22:54.405 [2024-05-15 16:44:01.547600] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.405 [2024-05-15 16:44:01.547717] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.405 [2024-05-15 16:44:01.556100] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.405 [2024-05-15 16:44:01.556133] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.405 [2024-05-15 16:44:01.556187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.405 [2024-05-15 16:44:01.556629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfd700 (107): Transport endpoint is not connected 00:22:54.405 [2024-05-15 16:44:01.557618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfd700 (9): Bad file descriptor 00:22:54.405 [2024-05-15 16:44:01.558618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.405 [2024-05-15 16:44:01.558639] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.405 [2024-05-15 16:44:01.558655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:54.405 request: 00:22:54.405 { 00:22:54.405 "name": "TLSTEST", 00:22:54.405 "trtype": "tcp", 00:22:54.405 "traddr": "10.0.0.2", 00:22:54.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.405 "adrfam": "ipv4", 00:22:54.405 "trsvcid": "4420", 00:22:54.405 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.405 "psk": "/tmp/tmp.H4vn06hLG5", 00:22:54.405 "method": "bdev_nvme_attach_controller", 00:22:54.405 "req_id": 1 00:22:54.405 } 00:22:54.405 Got JSON-RPC error response 00:22:54.405 response: 00:22:54.405 { 00:22:54.405 "code": -32602, 00:22:54.405 "message": "Invalid parameters" 00:22:54.405 } 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1815499 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1815499 ']' 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1815499 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1815499 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1815499' 00:22:54.405 killing process with pid 1815499 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1815499 00:22:54.405 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.405 00:22:54.405 Latency(us) 00:22:54.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.405 =================================================================================================================== 00:22:54.405 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.405 [2024-05-15 16:44:01.607649] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:54.405 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1815499 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1815638 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1815638 /var/tmp/bdevperf.sock 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1815638 ']' 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.663 16:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.663 [2024-05-15 16:44:01.875834] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:22:54.663 [2024-05-15 16:44:01.875920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815638 ] 00:22:54.922 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.922 [2024-05-15 16:44:01.944251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.922 [2024-05-15 16:44:02.026233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.922 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.922 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:54.922 16:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:55.488 [2024-05-15 16:44:02.408515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:55.488 [2024-05-15 16:44:02.409798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147cdd0 (9): Bad file descriptor 00:22:55.488 [2024-05-15 16:44:02.410792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.488 [2024-05-15 16:44:02.410813] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:55.488 [2024-05-15 16:44:02.410829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:55.488 request: 00:22:55.488 { 00:22:55.488 "name": "TLSTEST", 00:22:55.488 "trtype": "tcp", 00:22:55.488 "traddr": "10.0.0.2", 00:22:55.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.488 "adrfam": "ipv4", 00:22:55.488 "trsvcid": "4420", 00:22:55.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.488 "method": "bdev_nvme_attach_controller", 00:22:55.488 "req_id": 1 00:22:55.488 } 00:22:55.488 Got JSON-RPC error response 00:22:55.488 response: 00:22:55.488 { 00:22:55.488 "code": -32602, 00:22:55.488 "message": "Invalid parameters" 00:22:55.488 } 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1815638 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1815638 ']' 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1815638 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1815638 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1815638' 00:22:55.488 killing process with pid 1815638 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1815638 00:22:55.488 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.488 00:22:55.488 Latency(us) 00:22:55.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.488 =================================================================================================================== 00:22:55.488 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1815638 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1812256 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1812256 ']' 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1812256 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1812256 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1812256' 00:22:55.488 killing process with pid 1812256 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1812256 
00:22:55.488 [2024-05-15 16:44:02.699396] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:55.488 [2024-05-15 16:44:02.699457] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:55.488 16:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1812256 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:55.747 16:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ySg2DgbvoM 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ySg2DgbvoM 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1815784 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1815784 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1815784 ']' 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:56.005 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.005 [2024-05-15 16:44:03.063794] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
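The format_interchange_psk/format_key trace above is where the raw 48-character hex string becomes the retained-key interchange form written to /tmp/tmp.ySg2DgbvoM. A standalone sketch of what the embedded python step appears to compute: append the CRC32 of the key bytes (little-endian order is our assumption) and base64 the result, with digest 2 rendered as the 02 field of the prefix:

prefix=NVMeTLSkey-1 key=00112233445566778899aabbccddeeff0011223344556677 digest=2
python - <<PSK
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended to the key (byte order assumed)
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
PSK
# expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: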
00:22:56.005 [2024-05-15 16:44:03.063876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.005 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.005 [2024-05-15 16:44:03.141616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.005 [2024-05-15 16:44:03.226441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.005 [2024-05-15 16:44:03.226519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.005 [2024-05-15 16:44:03.226546] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.005 [2024-05-15 16:44:03.226559] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.005 [2024-05-15 16:44:03.226571] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.005 [2024-05-15 16:44:03.226611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ySg2DgbvoM 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ySg2DgbvoM 00:22:56.263 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.520 [2024-05-15 16:44:03.594457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.520 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:56.777 16:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:57.035 [2024-05-15 16:44:04.079732] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:57.035 [2024-05-15 16:44:04.079833] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.035 [2024-05-15 16:44:04.080071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.035 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:57.292 malloc0 00:22:57.292 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
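With the nvmf_subsystem_add_host call that follows, setup_nvmf_tgt (tls.sh@49-58) has issued the complete target-side sequence. Collected here for reference, verbatim from this trace ($rpc abbreviates the full scripts/rpc.py path; the comment on -k is our reading, consistent with the "TLS support is considered experimental" notice it provokes):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10  # serial number, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS-capable listener
$rpc bdev_malloc_create 32 4096 -b malloc0  # 32 MiB ramdisk with 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM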
00:22:57.550 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:22:57.808 [2024-05-15 16:44:04.890324] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ySg2DgbvoM 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ySg2DgbvoM' 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1816070 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1816070 /var/tmp/bdevperf.sock 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1816070 ']' 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.808 16:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.808 [2024-05-15 16:44:04.948281] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:22:57.808 [2024-05-15 16:44:04.948357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816070 ] 00:22:57.808 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.808 [2024-05-15 16:44:05.013382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.066 [2024-05-15 16:44:05.093742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.066 16:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.066 16:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:58.066 16:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:22:58.324 [2024-05-15 16:44:05.416241] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.324 [2024-05-15 16:44:05.416334] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:58.324 TLSTESTn1 00:22:58.324 16:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.582 Running I/O for 10 seconds... 00:23:08.542 00:23:08.542 Latency(us) 00:23:08.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.542 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.542 Verification LBA range: start 0x0 length 0x2000 00:23:08.542 TLSTESTn1 : 10.05 2901.82 11.34 0.00 0.00 43998.90 7815.77 47962.64 00:23:08.542 =================================================================================================================== 00:23:08.542 Total : 2901.82 11.34 0.00 0.00 43998.90 7815.77 47962.64 00:23:08.542 0 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1816070 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1816070 ']' 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1816070 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1816070 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1816070' 00:23:08.542 killing process with pid 1816070 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1816070 00:23:08.542 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.542 00:23:08.542 Latency(us) 00:23:08.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:08.542 =================================================================================================================== 00:23:08.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.542 [2024-05-15 16:44:15.721452] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.542 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1816070 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ySg2DgbvoM 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ySg2DgbvoM 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ySg2DgbvoM 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ySg2DgbvoM 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.799 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ySg2DgbvoM' 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1817290 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1817290 /var/tmp/bdevperf.sock 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1817290 ']' 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:08.800 16:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.800 [2024-05-15 16:44:15.994808] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:23:08.800 [2024-05-15 16:44:15.994889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817290 ] 00:23:09.057 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.057 [2024-05-15 16:44:16.071558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.057 [2024-05-15 16:44:16.161200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.057 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:09.057 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:09.057 16:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:23:09.314 [2024-05-15 16:44:16.510627] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.314 [2024-05-15 16:44:16.510705] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:09.315 [2024-05-15 16:44:16.510720] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ySg2DgbvoM 00:23:09.315 request: 00:23:09.315 { 00:23:09.315 "name": "TLSTEST", 00:23:09.315 "trtype": "tcp", 00:23:09.315 "traddr": "10.0.0.2", 00:23:09.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.315 "adrfam": "ipv4", 00:23:09.315 "trsvcid": "4420", 00:23:09.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.315 "psk": "/tmp/tmp.ySg2DgbvoM", 00:23:09.315 "method": "bdev_nvme_attach_controller", 00:23:09.315 "req_id": 1 00:23:09.315 } 00:23:09.315 Got JSON-RPC error response 00:23:09.315 response: 00:23:09.315 { 00:23:09.315 "code": -1, 00:23:09.315 "message": "Operation not permitted" 00:23:09.315 } 00:23:09.315 16:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1817290 00:23:09.315 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1817290 ']' 00:23:09.315 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1817290 00:23:09.315 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:09.315 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:09.315 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1817290 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1817290' 00:23:09.572 killing process with pid 1817290 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1817290 00:23:09.572 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.572 00:23:09.572 Latency(us) 00:23:09.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.572 =================================================================================================================== 00:23:09.572 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 1817290 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1815784 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1815784 ']' 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1815784 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:09.572 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1815784 00:23:09.829 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:09.829 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:09.829 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1815784' 00:23:09.829 killing process with pid 1815784 00:23:09.829 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1815784 00:23:09.829 [2024-05-15 16:44:16.814355] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:09.829 [2024-05-15 16:44:16.814415] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.829 16:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1815784 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1817420 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1817420 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1817420 ']' 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.090 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.090 [2024-05-15 16:44:17.113413] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:10.090 [2024-05-15 16:44:17.113497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.090 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.090 [2024-05-15 16:44:17.192893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.090 [2024-05-15 16:44:17.278595] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.090 [2024-05-15 16:44:17.278662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.090 [2024-05-15 16:44:17.278678] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.090 [2024-05-15 16:44:17.278691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.090 [2024-05-15 16:44:17.278703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.091 [2024-05-15 16:44:17.278743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ySg2DgbvoM 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ySg2DgbvoM 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ySg2DgbvoM 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ySg2DgbvoM 00:23:10.385 16:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.642 [2024-05-15 16:44:17.649259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.642 16:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.899 16:44:17 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.156 [2024-05-15 16:44:18.134510] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:11.156 [2024-05-15 16:44:18.134589] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.156 [2024-05-15 16:44:18.134824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.156 16:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.413 malloc0 00:23:11.413 16:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.671 16:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:23:11.929 [2024-05-15 16:44:18.912285] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:11.929 [2024-05-15 16:44:18.912327] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:11.929 [2024-05-15 16:44:18.912360] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:11.929 request: 00:23:11.929 { 00:23:11.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.929 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.929 "psk": "/tmp/tmp.ySg2DgbvoM", 00:23:11.929 "method": "nvmf_subsystem_add_host", 00:23:11.929 "req_id": 1 00:23:11.929 } 00:23:11.929 Got JSON-RPC error response 00:23:11.929 response: 00:23:11.929 { 00:23:11.929 "code": -32603, 00:23:11.929 "message": "Internal error" 00:23:11.929 } 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1817420 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1817420 ']' 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1817420 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1817420 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1817420' 00:23:11.929 killing process with pid 1817420 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1817420 00:23:11.929 [2024-05-15 16:44:18.963144] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:11.929 16:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1817420 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ySg2DgbvoM 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1817704 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1817704 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1817704 ']' 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.187 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.187 [2024-05-15 16:44:19.269772] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:12.187 [2024-05-15 16:44:19.269863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.187 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.187 [2024-05-15 16:44:19.349900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.446 [2024-05-15 16:44:19.438938] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.446 [2024-05-15 16:44:19.438998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.446 [2024-05-15 16:44:19.439024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.446 [2024-05-15 16:44:19.439038] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.446 [2024-05-15 16:44:19.439050] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
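The -32603 "Internal error" above is the intended negative check (target/tls.sh@177 runs setup_nvmf_tgt under NOT): the target refuses to load a PSK file that is readable by group or others ("Incorrect permissions for PSK file"), so nvmf_subsystem_add_host fails until the mode is tightened, which is exactly what the chmod 0600 at tls.sh@181 does before the target is restarted. A minimal sketch of the passing sequence, with the Jenkins workspace prefix shortened to scripts/rpc.py:

  # SPDK rejects a group/world-readable PSK file, so restrict it to the owner first
  chmod 0600 /tmp/tmp.ySg2DgbvoM
  # the same RPC then succeeds (it prints nothing on success)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM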
00:23:12.446 [2024-05-15 16:44:19.439080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ySg2DgbvoM 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ySg2DgbvoM 00:23:12.446 16:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.703 [2024-05-15 16:44:19.812594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.703 16:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.960 16:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:13.217 [2024-05-15 16:44:20.309875] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:13.217 [2024-05-15 16:44:20.309990] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.217 [2024-05-15 16:44:20.310235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.217 16:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.474 malloc0 00:23:13.474 16:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.731 16:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:23:13.988 [2024-05-15 16:44:21.156267] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1818004 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1818004 /var/tmp/bdevperf.sock 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1818004 ']' 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.988 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.246 [2024-05-15 16:44:21.216645] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:14.246 [2024-05-15 16:44:21.216714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818004 ] 00:23:14.246 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.246 [2024-05-15 16:44:21.282696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.246 [2024-05-15 16:44:21.362320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.246 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.246 16:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.246 16:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:23:14.503 [2024-05-15 16:44:21.693785] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.503 [2024-05-15 16:44:21.693902] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.760 TLSTESTn1 00:23:14.760 16:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:15.018 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:15.018 "subsystems": [ 00:23:15.018 { 00:23:15.018 "subsystem": "keyring", 00:23:15.018 "config": [] 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "subsystem": "iobuf", 00:23:15.018 "config": [ 00:23:15.018 { 00:23:15.018 "method": "iobuf_set_options", 00:23:15.018 "params": { 00:23:15.018 "small_pool_count": 8192, 00:23:15.018 "large_pool_count": 1024, 00:23:15.018 "small_bufsize": 8192, 00:23:15.018 "large_bufsize": 135168 00:23:15.018 } 00:23:15.018 } 00:23:15.018 ] 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "subsystem": "sock", 00:23:15.018 "config": [ 00:23:15.018 { 00:23:15.018 "method": "sock_impl_set_options", 00:23:15.018 "params": { 00:23:15.018 "impl_name": "posix", 00:23:15.018 "recv_buf_size": 2097152, 00:23:15.018 "send_buf_size": 2097152, 00:23:15.018 "enable_recv_pipe": true, 00:23:15.018 "enable_quickack": false, 00:23:15.018 "enable_placement_id": 0, 00:23:15.018 "enable_zerocopy_send_server": true, 00:23:15.018 "enable_zerocopy_send_client": false, 00:23:15.018 "zerocopy_threshold": 0, 00:23:15.018 "tls_version": 0, 00:23:15.018 "enable_ktls": false 00:23:15.018 } 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "method": "sock_impl_set_options", 00:23:15.018 "params": { 00:23:15.018 
"impl_name": "ssl", 00:23:15.018 "recv_buf_size": 4096, 00:23:15.018 "send_buf_size": 4096, 00:23:15.018 "enable_recv_pipe": true, 00:23:15.018 "enable_quickack": false, 00:23:15.018 "enable_placement_id": 0, 00:23:15.018 "enable_zerocopy_send_server": true, 00:23:15.018 "enable_zerocopy_send_client": false, 00:23:15.018 "zerocopy_threshold": 0, 00:23:15.018 "tls_version": 0, 00:23:15.018 "enable_ktls": false 00:23:15.018 } 00:23:15.018 } 00:23:15.018 ] 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "subsystem": "vmd", 00:23:15.018 "config": [] 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "subsystem": "accel", 00:23:15.018 "config": [ 00:23:15.018 { 00:23:15.018 "method": "accel_set_options", 00:23:15.018 "params": { 00:23:15.018 "small_cache_size": 128, 00:23:15.018 "large_cache_size": 16, 00:23:15.018 "task_count": 2048, 00:23:15.018 "sequence_count": 2048, 00:23:15.018 "buf_count": 2048 00:23:15.018 } 00:23:15.018 } 00:23:15.018 ] 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "subsystem": "bdev", 00:23:15.018 "config": [ 00:23:15.018 { 00:23:15.018 "method": "bdev_set_options", 00:23:15.018 "params": { 00:23:15.018 "bdev_io_pool_size": 65535, 00:23:15.018 "bdev_io_cache_size": 256, 00:23:15.018 "bdev_auto_examine": true, 00:23:15.018 "iobuf_small_cache_size": 128, 00:23:15.018 "iobuf_large_cache_size": 16 00:23:15.018 } 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "method": "bdev_raid_set_options", 00:23:15.018 "params": { 00:23:15.018 "process_window_size_kb": 1024 00:23:15.018 } 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "method": "bdev_iscsi_set_options", 00:23:15.018 "params": { 00:23:15.018 "timeout_sec": 30 00:23:15.018 } 00:23:15.018 }, 00:23:15.018 { 00:23:15.018 "method": "bdev_nvme_set_options", 00:23:15.018 "params": { 00:23:15.018 "action_on_timeout": "none", 00:23:15.019 "timeout_us": 0, 00:23:15.019 "timeout_admin_us": 0, 00:23:15.019 "keep_alive_timeout_ms": 10000, 00:23:15.019 "arbitration_burst": 0, 00:23:15.019 "low_priority_weight": 0, 00:23:15.019 "medium_priority_weight": 0, 00:23:15.019 "high_priority_weight": 0, 00:23:15.019 "nvme_adminq_poll_period_us": 10000, 00:23:15.019 "nvme_ioq_poll_period_us": 0, 00:23:15.019 "io_queue_requests": 0, 00:23:15.019 "delay_cmd_submit": true, 00:23:15.019 "transport_retry_count": 4, 00:23:15.019 "bdev_retry_count": 3, 00:23:15.019 "transport_ack_timeout": 0, 00:23:15.019 "ctrlr_loss_timeout_sec": 0, 00:23:15.019 "reconnect_delay_sec": 0, 00:23:15.019 "fast_io_fail_timeout_sec": 0, 00:23:15.019 "disable_auto_failback": false, 00:23:15.019 "generate_uuids": false, 00:23:15.019 "transport_tos": 0, 00:23:15.019 "nvme_error_stat": false, 00:23:15.019 "rdma_srq_size": 0, 00:23:15.019 "io_path_stat": false, 00:23:15.019 "allow_accel_sequence": false, 00:23:15.019 "rdma_max_cq_size": 0, 00:23:15.019 "rdma_cm_event_timeout_ms": 0, 00:23:15.019 "dhchap_digests": [ 00:23:15.019 "sha256", 00:23:15.019 "sha384", 00:23:15.019 "sha512" 00:23:15.019 ], 00:23:15.019 "dhchap_dhgroups": [ 00:23:15.019 "null", 00:23:15.019 "ffdhe2048", 00:23:15.019 "ffdhe3072", 00:23:15.019 "ffdhe4096", 00:23:15.019 "ffdhe6144", 00:23:15.019 "ffdhe8192" 00:23:15.019 ] 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "bdev_nvme_set_hotplug", 00:23:15.019 "params": { 00:23:15.019 "period_us": 100000, 00:23:15.019 "enable": false 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "bdev_malloc_create", 00:23:15.019 "params": { 00:23:15.019 "name": "malloc0", 00:23:15.019 "num_blocks": 8192, 00:23:15.019 "block_size": 4096, 00:23:15.019 
"physical_block_size": 4096, 00:23:15.019 "uuid": "d6475fd2-cdb7-4586-b404-e787bf000a2c", 00:23:15.019 "optimal_io_boundary": 0 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "bdev_wait_for_examine" 00:23:15.019 } 00:23:15.019 ] 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "subsystem": "nbd", 00:23:15.019 "config": [] 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "subsystem": "scheduler", 00:23:15.019 "config": [ 00:23:15.019 { 00:23:15.019 "method": "framework_set_scheduler", 00:23:15.019 "params": { 00:23:15.019 "name": "static" 00:23:15.019 } 00:23:15.019 } 00:23:15.019 ] 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "subsystem": "nvmf", 00:23:15.019 "config": [ 00:23:15.019 { 00:23:15.019 "method": "nvmf_set_config", 00:23:15.019 "params": { 00:23:15.019 "discovery_filter": "match_any", 00:23:15.019 "admin_cmd_passthru": { 00:23:15.019 "identify_ctrlr": false 00:23:15.019 } 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_set_max_subsystems", 00:23:15.019 "params": { 00:23:15.019 "max_subsystems": 1024 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_set_crdt", 00:23:15.019 "params": { 00:23:15.019 "crdt1": 0, 00:23:15.019 "crdt2": 0, 00:23:15.019 "crdt3": 0 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_create_transport", 00:23:15.019 "params": { 00:23:15.019 "trtype": "TCP", 00:23:15.019 "max_queue_depth": 128, 00:23:15.019 "max_io_qpairs_per_ctrlr": 127, 00:23:15.019 "in_capsule_data_size": 4096, 00:23:15.019 "max_io_size": 131072, 00:23:15.019 "io_unit_size": 131072, 00:23:15.019 "max_aq_depth": 128, 00:23:15.019 "num_shared_buffers": 511, 00:23:15.019 "buf_cache_size": 4294967295, 00:23:15.019 "dif_insert_or_strip": false, 00:23:15.019 "zcopy": false, 00:23:15.019 "c2h_success": false, 00:23:15.019 "sock_priority": 0, 00:23:15.019 "abort_timeout_sec": 1, 00:23:15.019 "ack_timeout": 0, 00:23:15.019 "data_wr_pool_size": 0 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_create_subsystem", 00:23:15.019 "params": { 00:23:15.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.019 "allow_any_host": false, 00:23:15.019 "serial_number": "SPDK00000000000001", 00:23:15.019 "model_number": "SPDK bdev Controller", 00:23:15.019 "max_namespaces": 10, 00:23:15.019 "min_cntlid": 1, 00:23:15.019 "max_cntlid": 65519, 00:23:15.019 "ana_reporting": false 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_subsystem_add_host", 00:23:15.019 "params": { 00:23:15.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.019 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.019 "psk": "/tmp/tmp.ySg2DgbvoM" 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_subsystem_add_ns", 00:23:15.019 "params": { 00:23:15.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.019 "namespace": { 00:23:15.019 "nsid": 1, 00:23:15.019 "bdev_name": "malloc0", 00:23:15.019 "nguid": "D6475FD2CDB74586B404E787BF000A2C", 00:23:15.019 "uuid": "d6475fd2-cdb7-4586-b404-e787bf000a2c", 00:23:15.019 "no_auto_visible": false 00:23:15.019 } 00:23:15.019 } 00:23:15.019 }, 00:23:15.019 { 00:23:15.019 "method": "nvmf_subsystem_add_listener", 00:23:15.019 "params": { 00:23:15.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.019 "listen_address": { 00:23:15.019 "trtype": "TCP", 00:23:15.019 "adrfam": "IPv4", 00:23:15.019 "traddr": "10.0.0.2", 00:23:15.019 "trsvcid": "4420" 00:23:15.019 }, 00:23:15.019 "secure_channel": true 00:23:15.019 } 00:23:15.019 } 00:23:15.019 ] 00:23:15.019 } 
00:23:15.019 ] 00:23:15.019 }' 00:23:15.019 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.276 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:15.276 "subsystems": [ 00:23:15.276 { 00:23:15.276 "subsystem": "keyring", 00:23:15.276 "config": [] 00:23:15.276 }, 00:23:15.276 { 00:23:15.276 "subsystem": "iobuf", 00:23:15.276 "config": [ 00:23:15.276 { 00:23:15.276 "method": "iobuf_set_options", 00:23:15.276 "params": { 00:23:15.277 "small_pool_count": 8192, 00:23:15.277 "large_pool_count": 1024, 00:23:15.277 "small_bufsize": 8192, 00:23:15.277 "large_bufsize": 135168 00:23:15.277 } 00:23:15.277 } 00:23:15.277 ] 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "subsystem": "sock", 00:23:15.277 "config": [ 00:23:15.277 { 00:23:15.277 "method": "sock_impl_set_options", 00:23:15.277 "params": { 00:23:15.277 "impl_name": "posix", 00:23:15.277 "recv_buf_size": 2097152, 00:23:15.277 "send_buf_size": 2097152, 00:23:15.277 "enable_recv_pipe": true, 00:23:15.277 "enable_quickack": false, 00:23:15.277 "enable_placement_id": 0, 00:23:15.277 "enable_zerocopy_send_server": true, 00:23:15.277 "enable_zerocopy_send_client": false, 00:23:15.277 "zerocopy_threshold": 0, 00:23:15.277 "tls_version": 0, 00:23:15.277 "enable_ktls": false 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "sock_impl_set_options", 00:23:15.277 "params": { 00:23:15.277 "impl_name": "ssl", 00:23:15.277 "recv_buf_size": 4096, 00:23:15.277 "send_buf_size": 4096, 00:23:15.277 "enable_recv_pipe": true, 00:23:15.277 "enable_quickack": false, 00:23:15.277 "enable_placement_id": 0, 00:23:15.277 "enable_zerocopy_send_server": true, 00:23:15.277 "enable_zerocopy_send_client": false, 00:23:15.277 "zerocopy_threshold": 0, 00:23:15.277 "tls_version": 0, 00:23:15.277 "enable_ktls": false 00:23:15.277 } 00:23:15.277 } 00:23:15.277 ] 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "subsystem": "vmd", 00:23:15.277 "config": [] 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "subsystem": "accel", 00:23:15.277 "config": [ 00:23:15.277 { 00:23:15.277 "method": "accel_set_options", 00:23:15.277 "params": { 00:23:15.277 "small_cache_size": 128, 00:23:15.277 "large_cache_size": 16, 00:23:15.277 "task_count": 2048, 00:23:15.277 "sequence_count": 2048, 00:23:15.277 "buf_count": 2048 00:23:15.277 } 00:23:15.277 } 00:23:15.277 ] 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "subsystem": "bdev", 00:23:15.277 "config": [ 00:23:15.277 { 00:23:15.277 "method": "bdev_set_options", 00:23:15.277 "params": { 00:23:15.277 "bdev_io_pool_size": 65535, 00:23:15.277 "bdev_io_cache_size": 256, 00:23:15.277 "bdev_auto_examine": true, 00:23:15.277 "iobuf_small_cache_size": 128, 00:23:15.277 "iobuf_large_cache_size": 16 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "bdev_raid_set_options", 00:23:15.277 "params": { 00:23:15.277 "process_window_size_kb": 1024 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "bdev_iscsi_set_options", 00:23:15.277 "params": { 00:23:15.277 "timeout_sec": 30 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "bdev_nvme_set_options", 00:23:15.277 "params": { 00:23:15.277 "action_on_timeout": "none", 00:23:15.277 "timeout_us": 0, 00:23:15.277 "timeout_admin_us": 0, 00:23:15.277 "keep_alive_timeout_ms": 10000, 00:23:15.277 "arbitration_burst": 0, 00:23:15.277 "low_priority_weight": 0, 00:23:15.277 "medium_priority_weight": 0, 00:23:15.277 
"high_priority_weight": 0, 00:23:15.277 "nvme_adminq_poll_period_us": 10000, 00:23:15.277 "nvme_ioq_poll_period_us": 0, 00:23:15.277 "io_queue_requests": 512, 00:23:15.277 "delay_cmd_submit": true, 00:23:15.277 "transport_retry_count": 4, 00:23:15.277 "bdev_retry_count": 3, 00:23:15.277 "transport_ack_timeout": 0, 00:23:15.277 "ctrlr_loss_timeout_sec": 0, 00:23:15.277 "reconnect_delay_sec": 0, 00:23:15.277 "fast_io_fail_timeout_sec": 0, 00:23:15.277 "disable_auto_failback": false, 00:23:15.277 "generate_uuids": false, 00:23:15.277 "transport_tos": 0, 00:23:15.277 "nvme_error_stat": false, 00:23:15.277 "rdma_srq_size": 0, 00:23:15.277 "io_path_stat": false, 00:23:15.277 "allow_accel_sequence": false, 00:23:15.277 "rdma_max_cq_size": 0, 00:23:15.277 "rdma_cm_event_timeout_ms": 0, 00:23:15.277 "dhchap_digests": [ 00:23:15.277 "sha256", 00:23:15.277 "sha384", 00:23:15.277 "sha512" 00:23:15.277 ], 00:23:15.277 "dhchap_dhgroups": [ 00:23:15.277 "null", 00:23:15.277 "ffdhe2048", 00:23:15.277 "ffdhe3072", 00:23:15.277 "ffdhe4096", 00:23:15.277 "ffdhe6144", 00:23:15.277 "ffdhe8192" 00:23:15.277 ] 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "bdev_nvme_attach_controller", 00:23:15.277 "params": { 00:23:15.277 "name": "TLSTEST", 00:23:15.277 "trtype": "TCP", 00:23:15.277 "adrfam": "IPv4", 00:23:15.277 "traddr": "10.0.0.2", 00:23:15.277 "trsvcid": "4420", 00:23:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.277 "prchk_reftag": false, 00:23:15.277 "prchk_guard": false, 00:23:15.277 "ctrlr_loss_timeout_sec": 0, 00:23:15.277 "reconnect_delay_sec": 0, 00:23:15.277 "fast_io_fail_timeout_sec": 0, 00:23:15.277 "psk": "/tmp/tmp.ySg2DgbvoM", 00:23:15.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.277 "hdgst": false, 00:23:15.277 "ddgst": false 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "bdev_nvme_set_hotplug", 00:23:15.277 "params": { 00:23:15.277 "period_us": 100000, 00:23:15.277 "enable": false 00:23:15.277 } 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "method": "bdev_wait_for_examine" 00:23:15.277 } 00:23:15.277 ] 00:23:15.277 }, 00:23:15.277 { 00:23:15.277 "subsystem": "nbd", 00:23:15.277 "config": [] 00:23:15.277 } 00:23:15.277 ] 00:23:15.277 }' 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1818004 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1818004 ']' 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1818004 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1818004 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1818004' 00:23:15.277 killing process with pid 1818004 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1818004 00:23:15.277 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.277 00:23:15.277 Latency(us) 00:23:15.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.277 
=================================================================================================================== 00:23:15.277 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.277 [2024-05-15 16:44:22.420859] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:15.277 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1818004 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1817704 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1817704 ']' 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1817704 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1817704 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1817704' 00:23:15.534 killing process with pid 1817704 00:23:15.534 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1817704 00:23:15.534 [2024-05-15 16:44:22.678336] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:15.534 [2024-05-15 16:44:22.678397] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1817704 00:23:15.534 removal in v24.09 hit 1 times 00:23:15.793 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:15.793 16:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.793 16:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:15.793 "subsystems": [ 00:23:15.793 { 00:23:15.793 "subsystem": "keyring", 00:23:15.793 "config": [] 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "subsystem": "iobuf", 00:23:15.793 "config": [ 00:23:15.793 { 00:23:15.793 "method": "iobuf_set_options", 00:23:15.793 "params": { 00:23:15.793 "small_pool_count": 8192, 00:23:15.793 "large_pool_count": 1024, 00:23:15.793 "small_bufsize": 8192, 00:23:15.793 "large_bufsize": 135168 00:23:15.793 } 00:23:15.793 } 00:23:15.793 ] 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "subsystem": "sock", 00:23:15.793 "config": [ 00:23:15.793 { 00:23:15.793 "method": "sock_impl_set_options", 00:23:15.793 "params": { 00:23:15.793 "impl_name": "posix", 00:23:15.793 "recv_buf_size": 2097152, 00:23:15.793 "send_buf_size": 2097152, 00:23:15.793 "enable_recv_pipe": true, 00:23:15.793 "enable_quickack": false, 00:23:15.793 "enable_placement_id": 0, 00:23:15.793 "enable_zerocopy_send_server": true, 00:23:15.793 "enable_zerocopy_send_client": false, 00:23:15.793 "zerocopy_threshold": 0, 00:23:15.793 "tls_version": 0, 00:23:15.793 "enable_ktls": false 00:23:15.793 } 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "method": "sock_impl_set_options", 00:23:15.793 "params": { 00:23:15.793 "impl_name": "ssl", 00:23:15.793 "recv_buf_size": 4096, 00:23:15.793 
"send_buf_size": 4096, 00:23:15.793 "enable_recv_pipe": true, 00:23:15.793 "enable_quickack": false, 00:23:15.793 "enable_placement_id": 0, 00:23:15.793 "enable_zerocopy_send_server": true, 00:23:15.793 "enable_zerocopy_send_client": false, 00:23:15.793 "zerocopy_threshold": 0, 00:23:15.793 "tls_version": 0, 00:23:15.793 "enable_ktls": false 00:23:15.793 } 00:23:15.793 } 00:23:15.793 ] 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "subsystem": "vmd", 00:23:15.793 "config": [] 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "subsystem": "accel", 00:23:15.793 "config": [ 00:23:15.793 { 00:23:15.793 "method": "accel_set_options", 00:23:15.793 "params": { 00:23:15.793 "small_cache_size": 128, 00:23:15.793 "large_cache_size": 16, 00:23:15.793 "task_count": 2048, 00:23:15.793 "sequence_count": 2048, 00:23:15.793 "buf_count": 2048 00:23:15.793 } 00:23:15.793 } 00:23:15.793 ] 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "subsystem": "bdev", 00:23:15.793 "config": [ 00:23:15.793 { 00:23:15.793 "method": "bdev_set_options", 00:23:15.793 "params": { 00:23:15.793 "bdev_io_pool_size": 65535, 00:23:15.793 "bdev_io_cache_size": 256, 00:23:15.793 "bdev_auto_examine": true, 00:23:15.793 "iobuf_small_cache_size": 128, 00:23:15.793 "iobuf_large_cache_size": 16 00:23:15.793 } 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "method": "bdev_raid_set_options", 00:23:15.793 "params": { 00:23:15.793 "process_window_size_kb": 1024 00:23:15.793 } 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "method": "bdev_iscsi_set_options", 00:23:15.793 "params": { 00:23:15.793 "timeout_sec": 30 00:23:15.793 } 00:23:15.793 }, 00:23:15.793 { 00:23:15.793 "method": "bdev_nvme_set_options", 00:23:15.793 "params": { 00:23:15.793 "action_on_timeout": "none", 00:23:15.793 "timeout_us": 0, 00:23:15.793 "timeout_admin_us": 0, 00:23:15.793 "keep_alive_timeout_ms": 10000, 00:23:15.793 "arbitration_burst": 0, 00:23:15.793 "low_priority_weight": 0, 00:23:15.793 "medium_priority_weight": 0, 00:23:15.793 "high_priority_weight": 0, 00:23:15.793 "nvme_adminq_poll_period_us": 10000, 00:23:15.793 "nvme_ioq_poll_period_us": 0, 00:23:15.793 "io_queue_requests": 0, 00:23:15.793 "delay_cmd_submit": true, 00:23:15.793 "transport_retry_count": 4, 00:23:15.793 "bdev_retry_count": 3, 00:23:15.793 "transport_ack_timeout": 0, 00:23:15.793 "ctrlr_loss_timeout_sec": 0, 00:23:15.793 "reconnect_delay_sec": 0, 00:23:15.793 "fast_io_fail_timeout_sec": 0, 00:23:15.793 "disable_auto_failback": false, 00:23:15.793 "generate_uuids": false, 00:23:15.793 "transport_tos": 0, 00:23:15.793 "nvme_error_stat": false, 00:23:15.793 "rdma_srq_size": 0, 00:23:15.794 "io_path_stat": false, 00:23:15.794 "allow_accel_sequence": false, 00:23:15.794 "rdma_max_cq_size": 0, 00:23:15.794 "rdma_cm_event_timeout_ms": 0, 00:23:15.794 "dhchap_digests": [ 00:23:15.794 "sha256", 00:23:15.794 "sha384", 00:23:15.794 "sha512" 00:23:15.794 ], 00:23:15.794 "dhchap_dhgroups": [ 00:23:15.794 "null", 00:23:15.794 "ffdhe2048", 00:23:15.794 "ffdhe3072", 00:23:15.794 "ffdhe4096", 00:23:15.794 "ffdhe6144", 00:23:15.794 "ffdhe8192" 00:23:15.794 ] 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "bdev_nvme_set_hotplug", 00:23:15.794 "params": { 00:23:15.794 "period_us": 100000, 00:23:15.794 "enable": false 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "bdev_malloc_create", 00:23:15.794 "params": { 00:23:15.794 "name": "malloc0", 00:23:15.794 "num_blocks": 8192, 00:23:15.794 "block_size": 4096, 00:23:15.794 "physical_block_size": 4096, 00:23:15.794 "uuid": 
"d6475fd2-cdb7-4586-b404-e787bf000a2c", 00:23:15.794 "optimal_io_boundary": 0 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "bdev_wait_for_examine" 00:23:15.794 } 00:23:15.794 ] 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "subsystem": "nbd", 00:23:15.794 "config": [] 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "subsystem": "scheduler", 00:23:15.794 "config": [ 00:23:15.794 { 00:23:15.794 "method": "framework_set_scheduler", 00:23:15.794 "params": { 00:23:15.794 "name": "static" 00:23:15.794 } 00:23:15.794 } 00:23:15.794 ] 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "subsystem": "nvmf", 00:23:15.794 "config": [ 00:23:15.794 { 00:23:15.794 "method": "nvmf_set_config", 00:23:15.794 "params": { 00:23:15.794 "discovery_filter": "match_any", 00:23:15.794 "admin_cmd_passthru": { 00:23:15.794 "identify_ctrlr": false 00:23:15.794 } 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_set_max_subsystems", 00:23:15.794 "params": { 00:23:15.794 "max_subsystems": 1024 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_set_crdt", 00:23:15.794 "params": { 00:23:15.794 "crdt1": 0, 00:23:15.794 "crdt2": 0, 00:23:15.794 "crdt3": 0 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_create_transport", 00:23:15.794 "params": { 00:23:15.794 "trtype": "TCP", 00:23:15.794 "max_queue_depth": 128, 00:23:15.794 "max_io_qpairs_per_ctrlr": 127, 00:23:15.794 "in_capsule_data_size": 4096, 00:23:15.794 "max_io_size": 131072, 00:23:15.794 "io_unit_size": 131072, 00:23:15.794 "max_aq_depth": 128, 00:23:15.794 "num_shared_buffers": 511, 00:23:15.794 "buf_cache_size": 4294967295, 00:23:15.794 "dif_insert_or_strip": false, 00:23:15.794 "zcopy": false, 00:23:15.794 "c2h_success": false, 00:23:15.794 "sock_priority": 0, 00:23:15.794 "abort_timeout_sec": 1, 00:23:15.794 "ack_timeout": 0, 00:23:15.794 "data_wr_pool_size": 0 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_create_subsystem", 00:23:15.794 "params": { 00:23:15.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.794 "allow_any_host": false, 00:23:15.794 "serial_number": "SPDK00000000000001", 00:23:15.794 "model_number": "SPDK bdev Controller", 00:23:15.794 "max_namespaces": 10, 00:23:15.794 "min_cntlid": 1, 00:23:15.794 "max_cntlid": 65519, 00:23:15.794 "ana_reporting": false 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_subsystem_add_host", 00:23:15.794 "params": { 00:23:15.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.794 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.794 "psk": "/tmp/tmp.ySg2DgbvoM" 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_subsystem_add_ns", 00:23:15.794 "params": { 00:23:15.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.794 "namespace": { 00:23:15.794 "nsid": 1, 00:23:15.794 "bdev_name": "malloc0", 00:23:15.794 "nguid": "D6475FD2CDB74586B404E787BF000A2C", 00:23:15.794 "uuid": "d6475fd2-cdb7-4586-b404-e787bf000a2c", 00:23:15.794 "no_auto_visible": false 00:23:15.794 } 00:23:15.794 } 00:23:15.794 }, 00:23:15.794 { 00:23:15.794 "method": "nvmf_subsystem_add_listener", 00:23:15.794 "params": { 00:23:15.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.794 "listen_address": { 00:23:15.794 "trtype": "TCP", 00:23:15.794 "adrfam": "IPv4", 00:23:15.794 "traddr": "10.0.0.2", 00:23:15.794 "trsvcid": "4420" 00:23:15.794 }, 00:23:15.794 "secure_channel": true 00:23:15.794 } 00:23:15.794 } 00:23:15.794 ] 00:23:15.794 } 00:23:15.794 ] 00:23:15.794 }' 00:23:15.794 16:44:22 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1818277 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1818277 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1818277 ']' 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.794 16:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.794 [2024-05-15 16:44:22.981973] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:15.794 [2024-05-15 16:44:22.982060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.794 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.052 [2024-05-15 16:44:23.061111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.052 [2024-05-15 16:44:23.146064] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.052 [2024-05-15 16:44:23.146130] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.052 [2024-05-15 16:44:23.146148] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.052 [2024-05-15 16:44:23.146162] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.052 [2024-05-15 16:44:23.146173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
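This target (pid 1818277) is not configured through live RPCs: tls.sh@203 starts it with -c /dev/fd/62, replaying the tgtconf JSON that save_config captured from the previous target, so the test checks that the TLS listener and the PSK host entry survive a configuration round trip. A sketch of the pattern, assuming bash process substitution as the source of the /dev/fd path and with the netns wrapper and Jenkins paths stripped:

  # capture the running target's configuration as JSON ...
  tgtconf=$(scripts/rpc.py save_config)
  # ... and boot a fresh target straight from it; <(...) shows up to the
  # application as /dev/fd/62 (or similar), hence the -c argument above
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")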
00:23:16.052 [2024-05-15 16:44:23.146284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.309 [2024-05-15 16:44:23.364866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.309 [2024-05-15 16:44:23.380806] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:16.309 [2024-05-15 16:44:23.396828] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:16.309 [2024-05-15 16:44:23.396894] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.309 [2024-05-15 16:44:23.409429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1818421 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1818421 /var/tmp/bdevperf.sock 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1818421 ']' 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
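The initiator side mirrors the same trick: tls.sh@204 launches bdevperf with -c /dev/fd/63, feeding it the bdevperfconf JSON captured earlier from the bdevperf RPC socket (echoed below), so the TLS-enabled controller, PSK path included, is created at startup rather than through a runtime bdev_nvme_attach_controller call. Sketched with shortened paths:

  # dump the config of the first bdevperf instance ...
  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # ... and hand it to a new instance as its startup config
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")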
00:23:16.874 16:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:16.874 "subsystems": [ 00:23:16.874 { 00:23:16.874 "subsystem": "keyring", 00:23:16.874 "config": [] 00:23:16.874 }, 00:23:16.874 { 00:23:16.874 "subsystem": "iobuf", 00:23:16.874 "config": [ 00:23:16.874 { 00:23:16.874 "method": "iobuf_set_options", 00:23:16.874 "params": { 00:23:16.874 "small_pool_count": 8192, 00:23:16.874 "large_pool_count": 1024, 00:23:16.874 "small_bufsize": 8192, 00:23:16.874 "large_bufsize": 135168 00:23:16.874 } 00:23:16.874 } 00:23:16.874 ] 00:23:16.874 }, 00:23:16.874 { 00:23:16.874 "subsystem": "sock", 00:23:16.874 "config": [ 00:23:16.874 { 00:23:16.874 "method": "sock_impl_set_options", 00:23:16.874 "params": { 00:23:16.874 "impl_name": "posix", 00:23:16.874 "recv_buf_size": 2097152, 00:23:16.874 "send_buf_size": 2097152, 00:23:16.874 "enable_recv_pipe": true, 00:23:16.874 "enable_quickack": false, 00:23:16.874 "enable_placement_id": 0, 00:23:16.874 "enable_zerocopy_send_server": true, 00:23:16.874 "enable_zerocopy_send_client": false, 00:23:16.874 "zerocopy_threshold": 0, 00:23:16.874 "tls_version": 0, 00:23:16.874 "enable_ktls": false 00:23:16.874 } 00:23:16.874 }, 00:23:16.874 { 00:23:16.874 "method": "sock_impl_set_options", 00:23:16.874 "params": { 00:23:16.874 "impl_name": "ssl", 00:23:16.874 "recv_buf_size": 4096, 00:23:16.874 "send_buf_size": 4096, 00:23:16.874 "enable_recv_pipe": true, 00:23:16.875 "enable_quickack": false, 00:23:16.875 "enable_placement_id": 0, 00:23:16.875 "enable_zerocopy_send_server": true, 00:23:16.875 "enable_zerocopy_send_client": false, 00:23:16.875 "zerocopy_threshold": 0, 00:23:16.875 "tls_version": 0, 00:23:16.875 "enable_ktls": false 00:23:16.875 } 00:23:16.875 } 00:23:16.875 ] 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "subsystem": "vmd", 00:23:16.875 "config": [] 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "subsystem": "accel", 00:23:16.875 "config": [ 00:23:16.875 { 00:23:16.875 "method": "accel_set_options", 00:23:16.875 "params": { 00:23:16.875 "small_cache_size": 128, 00:23:16.875 "large_cache_size": 16, 00:23:16.875 "task_count": 2048, 00:23:16.875 "sequence_count": 2048, 00:23:16.875 "buf_count": 2048 00:23:16.875 } 00:23:16.875 } 00:23:16.875 ] 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "subsystem": "bdev", 00:23:16.875 "config": [ 00:23:16.875 { 00:23:16.875 "method": "bdev_set_options", 00:23:16.875 "params": { 00:23:16.875 "bdev_io_pool_size": 65535, 00:23:16.875 "bdev_io_cache_size": 256, 00:23:16.875 "bdev_auto_examine": true, 00:23:16.875 "iobuf_small_cache_size": 128, 00:23:16.875 "iobuf_large_cache_size": 16 00:23:16.875 } 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "method": "bdev_raid_set_options", 00:23:16.875 "params": { 00:23:16.875 "process_window_size_kb": 1024 00:23:16.875 } 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "method": "bdev_iscsi_set_options", 00:23:16.875 "params": { 00:23:16.875 "timeout_sec": 30 00:23:16.875 } 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "method": "bdev_nvme_set_options", 00:23:16.875 "params": { 00:23:16.875 "action_on_timeout": "none", 00:23:16.875 "timeout_us": 0, 00:23:16.875 "timeout_admin_us": 0, 00:23:16.875 "keep_alive_timeout_ms": 10000, 00:23:16.875 "arbitration_burst": 0, 00:23:16.875 "low_priority_weight": 0, 00:23:16.875 "medium_priority_weight": 0, 00:23:16.875 "high_priority_weight": 0, 00:23:16.875 "nvme_adminq_poll_period_us": 10000, 00:23:16.875 "nvme_ioq_poll_period_us": 0, 00:23:16.875 "io_queue_requests": 512, 00:23:16.875 "delay_cmd_submit": true, 00:23:16.875 
"transport_retry_count": 4, 00:23:16.875 "bdev_retry_count": 3, 00:23:16.875 "transport_ack_timeout": 0, 00:23:16.875 "ctrlr_loss_timeout_sec": 0, 00:23:16.875 "reconnect_delay_sec": 0, 00:23:16.875 "fast_io_fail_timeout_sec": 0, 00:23:16.875 "disable_auto_failback": false, 00:23:16.875 "generate_uuids": false, 00:23:16.875 "transport_tos": 0, 00:23:16.875 "nvme_error_stat": false, 00:23:16.875 "rdma_srq_size": 0, 00:23:16.875 "io_path_stat": false, 00:23:16.875 "allow_accel_sequence": false, 00:23:16.875 "rdma_max_cq_size": 0, 00:23:16.875 "rdma_cm_event_timeout_ms": 0, 00:23:16.875 "dhchap_digests": [ 00:23:16.875 "sha256", 00:23:16.875 "sha384", 00:23:16.875 "sha512" 00:23:16.875 ], 00:23:16.875 "dhchap_dhgroups": [ 00:23:16.875 "null", 00:23:16.875 "ffdhe2048", 00:23:16.875 "ffdhe3072", 00:23:16.875 "ffdhe4096", 00:23:16.875 "ffdhe6144", 00:23:16.875 "ffdhe8192" 00:23:16.875 ] 00:23:16.875 } 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "method": "bdev_nvme_attach_controller", 00:23:16.875 "params": { 00:23:16.875 "name": "TLSTEST", 00:23:16.875 "trtype": "TCP", 00:23:16.875 "adrfam": "IPv4", 00:23:16.875 "traddr": "10.0.0.2", 00:23:16.875 "trsvcid": "4420", 00:23:16.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.875 "prchk_reftag": false, 00:23:16.875 "prchk_guard": false, 00:23:16.875 "ctrlr_loss_timeout_sec": 0, 00:23:16.875 "reconnect_delay_sec": 0, 00:23:16.875 "fast_io_fail_timeout_sec": 0, 00:23:16.875 "psk": "/tmp/tmp.ySg2DgbvoM", 00:23:16.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.875 "hdgst": false, 00:23:16.875 "ddgst": false 00:23:16.875 } 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "method": "bdev_nvme_set_hotplug", 00:23:16.875 "params": { 00:23:16.875 "period_us": 100000, 00:23:16.875 "enable": false 00:23:16.875 } 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "method": "bdev_wait_for_examine" 00:23:16.875 } 00:23:16.875 ] 00:23:16.875 }, 00:23:16.875 { 00:23:16.875 "subsystem": "nbd", 00:23:16.875 "config": [] 00:23:16.875 } 00:23:16.875 ] 00:23:16.875 }' 00:23:16.875 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.875 16:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.875 [2024-05-15 16:44:23.996412] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:23:16.875 [2024-05-15 16:44:23.996486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818421 ] 00:23:16.875 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.875 [2024-05-15 16:44:24.061837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.133 [2024-05-15 16:44:24.142597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.133 [2024-05-15 16:44:24.302568] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.133 [2024-05-15 16:44:24.302703] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:18.063 16:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:18.063 16:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:18.063 16:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:18.063 Running I/O for 10 seconds... 00:23:28.020 00:23:28.020 Latency(us) 00:23:28.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.020 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.020 Verification LBA range: start 0x0 length 0x2000 00:23:28.020 TLSTESTn1 : 10.04 1627.98 6.36 0.00 0.00 78491.97 14466.47 71458.51 00:23:28.020 =================================================================================================================== 00:23:28.020 Total : 1627.98 6.36 0.00 0.00 78491.97 14466.47 71458.51 00:23:28.020 0 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1818421 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1818421 ']' 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1818421 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1818421 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1818421' 00:23:28.020 killing process with pid 1818421 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1818421 00:23:28.020 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.020 00:23:28.020 Latency(us) 00:23:28.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.020 =================================================================================================================== 00:23:28.020 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.020 [2024-05-15 16:44:35.200764] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:23:28.020 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1818421 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1818277 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1818277 ']' 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1818277 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1818277 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1818277' 00:23:28.277 killing process with pid 1818277 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1818277 00:23:28.277 [2024-05-15 16:44:35.449350] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:28.277 [2024-05-15 16:44:35.449414] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.277 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1818277 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1819751 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1819751 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1819751 ']' 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.535 16:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.535 [2024-05-15 16:44:35.759272] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:23:28.535 [2024-05-15 16:44:35.759351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.793 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.793 [2024-05-15 16:44:35.837305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.793 [2024-05-15 16:44:35.920978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.793 [2024-05-15 16:44:35.921037] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.793 [2024-05-15 16:44:35.921075] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.793 [2024-05-15 16:44:35.921089] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.793 [2024-05-15 16:44:35.921100] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.793 [2024-05-15 16:44:35.921149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ySg2DgbvoM 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ySg2DgbvoM 00:23:29.051 16:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.309 [2024-05-15 16:44:36.285167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.309 16:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.567 16:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.567 [2024-05-15 16:44:36.774475] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:29.567 [2024-05-15 16:44:36.774600] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.567 [2024-05-15 16:44:36.774819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.567 16:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:30.132 malloc0 00:23:30.132 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
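This third pass (tls.sh@219) repeats the same setup_nvmf_tgt helper from tls.sh@49-58; stripped of the Jenkins paths, the whole target-side TLS setup reduces to the RPCs below, where -k on the listener is what requests the secure channel (the nvmf_subsystem_add_host call with the PSK follows immediately after):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-secured ("secure_channel": true in the save_config dump)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1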
00:23:30.390 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM 00:23:30.648 [2024-05-15 16:44:37.649045] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1819991 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1819991 /var/tmp/bdevperf.sock 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1819991 ']' 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.648 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.649 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.649 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.649 [2024-05-15 16:44:37.712176] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:30.649 [2024-05-15 16:44:37.712257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819991 ] 00:23:30.649 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.649 [2024-05-15 16:44:37.782942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.649 [2024-05-15 16:44:37.869701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.937 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.937 16:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:30.937 16:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ySg2DgbvoM 00:23:31.194 16:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:31.452 [2024-05-15 16:44:38.454318] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.452 nvme0n1 00:23:31.452 16:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.452 Running I/O for 1 seconds... 
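Stripped of the xtrace noise, the TLS setup exercised in this pass reduces to a short RPC sequence. A consolidated sketch of the calls visible above, with rpc.py abbreviating the full scripts/rpc.py path used in the log and the PSK file name taken from this run:

# target side: TCP transport, subsystem, TLS-enabled listener (-k), backing namespace, PSK-authorized host
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ySg2DgbvoM
# initiator side (bdevperf): register the same PSK file under a keyring name, then attach with it
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ySg2DgbvoM
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

As the deprecation warnings in the trace note, both [listen_]address.transport and the path form of --psk are scheduled for removal in v24.09.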
00:23:32.822 00:23:32.822 Latency(us) 00:23:32.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.822 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.822 Verification LBA range: start 0x0 length 0x2000 00:23:32.822 nvme0n1 : 1.03 3258.40 12.73 0.00 0.00 38722.63 7815.77 46991.74 00:23:32.822 =================================================================================================================== 00:23:32.822 Total : 3258.40 12.73 0.00 0.00 38722.63 7815.77 46991.74 00:23:32.822 0 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1819991 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1819991 ']' 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1819991 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1819991 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1819991' 00:23:32.822 killing process with pid 1819991 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1819991 00:23:32.822 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.822 00:23:32.822 Latency(us) 00:23:32.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.822 =================================================================================================================== 00:23:32.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1819991 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1819751 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1819751 ']' 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1819751 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1819751 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1819751' 00:23:32.822 killing process with pid 1819751 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1819751 00:23:32.822 [2024-05-15 16:44:39.985735] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:32.822 [2024-05-15 16:44:39.985785] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:32.822 16:44:39 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 1819751 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1820324 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1820324 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1820324 ']' 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.080 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.080 [2024-05-15 16:44:40.283845] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:33.080 [2024-05-15 16:44:40.283920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.338 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.338 [2024-05-15 16:44:40.357902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.338 [2024-05-15 16:44:40.440758] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.338 [2024-05-15 16:44:40.440821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.338 [2024-05-15 16:44:40.440849] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.338 [2024-05-15 16:44:40.440862] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.338 [2024-05-15 16:44:40.440873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.338 [2024-05-15 16:44:40.440906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.338 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.338 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.338 16:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.338 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.338 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.595 [2024-05-15 16:44:40.580127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.595 malloc0 00:23:33.595 [2024-05-15 16:44:40.612088] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:33.595 [2024-05-15 16:44:40.612189] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.595 [2024-05-15 16:44:40.612449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1820349 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1820349 /var/tmp/bdevperf.sock 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1820349 ']' 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.595 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.595 [2024-05-15 16:44:40.682833] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:23:33.595 [2024-05-15 16:44:40.682905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820349 ] 00:23:33.595 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.595 [2024-05-15 16:44:40.754192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.853 [2024-05-15 16:44:40.842766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.853 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.853 16:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.853 16:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ySg2DgbvoM 00:23:34.109 16:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:34.365 [2024-05-15 16:44:41.525342] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.622 nvme0n1 00:23:34.622 16:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.622 Running I/O for 1 seconds... 00:23:35.553 00:23:35.553 Latency(us) 00:23:35.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.553 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.553 Verification LBA range: start 0x0 length 0x2000 00:23:35.553 nvme0n1 : 1.03 3419.57 13.36 0.00 0.00 37003.15 6359.42 54370.61 00:23:35.553 =================================================================================================================== 00:23:35.553 Total : 3419.57 13.36 0.00 0.00 37003.15 6359.42 54370.61 00:23:35.553 0 00:23:35.553 16:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:35.553 16:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.553 16:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.811 16:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.811 16:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:35.811 "subsystems": [ 00:23:35.811 { 00:23:35.811 "subsystem": "keyring", 00:23:35.811 "config": [ 00:23:35.811 { 00:23:35.811 "method": "keyring_file_add_key", 00:23:35.811 "params": { 00:23:35.811 "name": "key0", 00:23:35.811 "path": "/tmp/tmp.ySg2DgbvoM" 00:23:35.811 } 00:23:35.811 } 00:23:35.811 ] 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "subsystem": "iobuf", 00:23:35.811 "config": [ 00:23:35.811 { 00:23:35.811 "method": "iobuf_set_options", 00:23:35.811 "params": { 00:23:35.811 "small_pool_count": 8192, 00:23:35.811 "large_pool_count": 1024, 00:23:35.811 "small_bufsize": 8192, 00:23:35.811 "large_bufsize": 135168 00:23:35.811 } 00:23:35.811 } 00:23:35.811 ] 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "subsystem": "sock", 00:23:35.811 "config": [ 00:23:35.811 { 00:23:35.811 "method": "sock_impl_set_options", 00:23:35.811 "params": { 00:23:35.811 "impl_name": "posix", 00:23:35.811 "recv_buf_size": 2097152, 
00:23:35.811 "send_buf_size": 2097152, 00:23:35.811 "enable_recv_pipe": true, 00:23:35.811 "enable_quickack": false, 00:23:35.811 "enable_placement_id": 0, 00:23:35.811 "enable_zerocopy_send_server": true, 00:23:35.811 "enable_zerocopy_send_client": false, 00:23:35.811 "zerocopy_threshold": 0, 00:23:35.811 "tls_version": 0, 00:23:35.811 "enable_ktls": false 00:23:35.811 } 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "method": "sock_impl_set_options", 00:23:35.811 "params": { 00:23:35.811 "impl_name": "ssl", 00:23:35.811 "recv_buf_size": 4096, 00:23:35.811 "send_buf_size": 4096, 00:23:35.811 "enable_recv_pipe": true, 00:23:35.811 "enable_quickack": false, 00:23:35.811 "enable_placement_id": 0, 00:23:35.811 "enable_zerocopy_send_server": true, 00:23:35.811 "enable_zerocopy_send_client": false, 00:23:35.811 "zerocopy_threshold": 0, 00:23:35.811 "tls_version": 0, 00:23:35.811 "enable_ktls": false 00:23:35.811 } 00:23:35.811 } 00:23:35.811 ] 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "subsystem": "vmd", 00:23:35.811 "config": [] 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "subsystem": "accel", 00:23:35.811 "config": [ 00:23:35.811 { 00:23:35.811 "method": "accel_set_options", 00:23:35.811 "params": { 00:23:35.811 "small_cache_size": 128, 00:23:35.811 "large_cache_size": 16, 00:23:35.811 "task_count": 2048, 00:23:35.811 "sequence_count": 2048, 00:23:35.811 "buf_count": 2048 00:23:35.811 } 00:23:35.811 } 00:23:35.811 ] 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "subsystem": "bdev", 00:23:35.811 "config": [ 00:23:35.811 { 00:23:35.811 "method": "bdev_set_options", 00:23:35.811 "params": { 00:23:35.811 "bdev_io_pool_size": 65535, 00:23:35.811 "bdev_io_cache_size": 256, 00:23:35.811 "bdev_auto_examine": true, 00:23:35.811 "iobuf_small_cache_size": 128, 00:23:35.811 "iobuf_large_cache_size": 16 00:23:35.811 } 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "method": "bdev_raid_set_options", 00:23:35.811 "params": { 00:23:35.811 "process_window_size_kb": 1024 00:23:35.811 } 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "method": "bdev_iscsi_set_options", 00:23:35.811 "params": { 00:23:35.811 "timeout_sec": 30 00:23:35.811 } 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "method": "bdev_nvme_set_options", 00:23:35.811 "params": { 00:23:35.811 "action_on_timeout": "none", 00:23:35.811 "timeout_us": 0, 00:23:35.811 "timeout_admin_us": 0, 00:23:35.811 "keep_alive_timeout_ms": 10000, 00:23:35.811 "arbitration_burst": 0, 00:23:35.811 "low_priority_weight": 0, 00:23:35.811 "medium_priority_weight": 0, 00:23:35.811 "high_priority_weight": 0, 00:23:35.811 "nvme_adminq_poll_period_us": 10000, 00:23:35.811 "nvme_ioq_poll_period_us": 0, 00:23:35.811 "io_queue_requests": 0, 00:23:35.811 "delay_cmd_submit": true, 00:23:35.811 "transport_retry_count": 4, 00:23:35.811 "bdev_retry_count": 3, 00:23:35.811 "transport_ack_timeout": 0, 00:23:35.811 "ctrlr_loss_timeout_sec": 0, 00:23:35.811 "reconnect_delay_sec": 0, 00:23:35.811 "fast_io_fail_timeout_sec": 0, 00:23:35.811 "disable_auto_failback": false, 00:23:35.811 "generate_uuids": false, 00:23:35.811 "transport_tos": 0, 00:23:35.811 "nvme_error_stat": false, 00:23:35.811 "rdma_srq_size": 0, 00:23:35.811 "io_path_stat": false, 00:23:35.811 "allow_accel_sequence": false, 00:23:35.811 "rdma_max_cq_size": 0, 00:23:35.811 "rdma_cm_event_timeout_ms": 0, 00:23:35.811 "dhchap_digests": [ 00:23:35.811 "sha256", 00:23:35.811 "sha384", 00:23:35.811 "sha512" 00:23:35.811 ], 00:23:35.811 "dhchap_dhgroups": [ 00:23:35.811 "null", 00:23:35.811 "ffdhe2048", 00:23:35.811 "ffdhe3072", 
00:23:35.811 "ffdhe4096", 00:23:35.811 "ffdhe6144", 00:23:35.811 "ffdhe8192" 00:23:35.811 ] 00:23:35.811 } 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "method": "bdev_nvme_set_hotplug", 00:23:35.811 "params": { 00:23:35.811 "period_us": 100000, 00:23:35.811 "enable": false 00:23:35.811 } 00:23:35.811 }, 00:23:35.811 { 00:23:35.811 "method": "bdev_malloc_create", 00:23:35.811 "params": { 00:23:35.811 "name": "malloc0", 00:23:35.811 "num_blocks": 8192, 00:23:35.812 "block_size": 4096, 00:23:35.812 "physical_block_size": 4096, 00:23:35.812 "uuid": "22bee2b4-3d65-4ed4-b8f2-3c89c6718a97", 00:23:35.812 "optimal_io_boundary": 0 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "bdev_wait_for_examine" 00:23:35.812 } 00:23:35.812 ] 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "subsystem": "nbd", 00:23:35.812 "config": [] 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "subsystem": "scheduler", 00:23:35.812 "config": [ 00:23:35.812 { 00:23:35.812 "method": "framework_set_scheduler", 00:23:35.812 "params": { 00:23:35.812 "name": "static" 00:23:35.812 } 00:23:35.812 } 00:23:35.812 ] 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "subsystem": "nvmf", 00:23:35.812 "config": [ 00:23:35.812 { 00:23:35.812 "method": "nvmf_set_config", 00:23:35.812 "params": { 00:23:35.812 "discovery_filter": "match_any", 00:23:35.812 "admin_cmd_passthru": { 00:23:35.812 "identify_ctrlr": false 00:23:35.812 } 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_set_max_subsystems", 00:23:35.812 "params": { 00:23:35.812 "max_subsystems": 1024 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_set_crdt", 00:23:35.812 "params": { 00:23:35.812 "crdt1": 0, 00:23:35.812 "crdt2": 0, 00:23:35.812 "crdt3": 0 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_create_transport", 00:23:35.812 "params": { 00:23:35.812 "trtype": "TCP", 00:23:35.812 "max_queue_depth": 128, 00:23:35.812 "max_io_qpairs_per_ctrlr": 127, 00:23:35.812 "in_capsule_data_size": 4096, 00:23:35.812 "max_io_size": 131072, 00:23:35.812 "io_unit_size": 131072, 00:23:35.812 "max_aq_depth": 128, 00:23:35.812 "num_shared_buffers": 511, 00:23:35.812 "buf_cache_size": 4294967295, 00:23:35.812 "dif_insert_or_strip": false, 00:23:35.812 "zcopy": false, 00:23:35.812 "c2h_success": false, 00:23:35.812 "sock_priority": 0, 00:23:35.812 "abort_timeout_sec": 1, 00:23:35.812 "ack_timeout": 0, 00:23:35.812 "data_wr_pool_size": 0 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_create_subsystem", 00:23:35.812 "params": { 00:23:35.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.812 "allow_any_host": false, 00:23:35.812 "serial_number": "00000000000000000000", 00:23:35.812 "model_number": "SPDK bdev Controller", 00:23:35.812 "max_namespaces": 32, 00:23:35.812 "min_cntlid": 1, 00:23:35.812 "max_cntlid": 65519, 00:23:35.812 "ana_reporting": false 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_subsystem_add_host", 00:23:35.812 "params": { 00:23:35.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.812 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.812 "psk": "key0" 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_subsystem_add_ns", 00:23:35.812 "params": { 00:23:35.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.812 "namespace": { 00:23:35.812 "nsid": 1, 00:23:35.812 "bdev_name": "malloc0", 00:23:35.812 "nguid": "22BEE2B43D654ED4B8F23C89C6718A97", 00:23:35.812 "uuid": "22bee2b4-3d65-4ed4-b8f2-3c89c6718a97", 00:23:35.812 
"no_auto_visible": false 00:23:35.812 } 00:23:35.812 } 00:23:35.812 }, 00:23:35.812 { 00:23:35.812 "method": "nvmf_subsystem_add_listener", 00:23:35.812 "params": { 00:23:35.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.812 "listen_address": { 00:23:35.812 "trtype": "TCP", 00:23:35.812 "adrfam": "IPv4", 00:23:35.812 "traddr": "10.0.0.2", 00:23:35.812 "trsvcid": "4420" 00:23:35.812 }, 00:23:35.812 "secure_channel": true 00:23:35.812 } 00:23:35.812 } 00:23:35.812 ] 00:23:35.812 } 00:23:35.812 ] 00:23:35.812 }' 00:23:35.812 16:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:36.070 16:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:36.070 "subsystems": [ 00:23:36.070 { 00:23:36.070 "subsystem": "keyring", 00:23:36.070 "config": [ 00:23:36.070 { 00:23:36.070 "method": "keyring_file_add_key", 00:23:36.070 "params": { 00:23:36.070 "name": "key0", 00:23:36.070 "path": "/tmp/tmp.ySg2DgbvoM" 00:23:36.070 } 00:23:36.070 } 00:23:36.070 ] 00:23:36.070 }, 00:23:36.070 { 00:23:36.070 "subsystem": "iobuf", 00:23:36.070 "config": [ 00:23:36.070 { 00:23:36.070 "method": "iobuf_set_options", 00:23:36.070 "params": { 00:23:36.070 "small_pool_count": 8192, 00:23:36.070 "large_pool_count": 1024, 00:23:36.070 "small_bufsize": 8192, 00:23:36.070 "large_bufsize": 135168 00:23:36.070 } 00:23:36.070 } 00:23:36.070 ] 00:23:36.070 }, 00:23:36.070 { 00:23:36.070 "subsystem": "sock", 00:23:36.070 "config": [ 00:23:36.070 { 00:23:36.070 "method": "sock_impl_set_options", 00:23:36.070 "params": { 00:23:36.070 "impl_name": "posix", 00:23:36.070 "recv_buf_size": 2097152, 00:23:36.070 "send_buf_size": 2097152, 00:23:36.070 "enable_recv_pipe": true, 00:23:36.070 "enable_quickack": false, 00:23:36.070 "enable_placement_id": 0, 00:23:36.070 "enable_zerocopy_send_server": true, 00:23:36.070 "enable_zerocopy_send_client": false, 00:23:36.070 "zerocopy_threshold": 0, 00:23:36.070 "tls_version": 0, 00:23:36.070 "enable_ktls": false 00:23:36.070 } 00:23:36.070 }, 00:23:36.070 { 00:23:36.070 "method": "sock_impl_set_options", 00:23:36.070 "params": { 00:23:36.070 "impl_name": "ssl", 00:23:36.070 "recv_buf_size": 4096, 00:23:36.070 "send_buf_size": 4096, 00:23:36.070 "enable_recv_pipe": true, 00:23:36.070 "enable_quickack": false, 00:23:36.070 "enable_placement_id": 0, 00:23:36.070 "enable_zerocopy_send_server": true, 00:23:36.071 "enable_zerocopy_send_client": false, 00:23:36.071 "zerocopy_threshold": 0, 00:23:36.071 "tls_version": 0, 00:23:36.071 "enable_ktls": false 00:23:36.071 } 00:23:36.071 } 00:23:36.071 ] 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "subsystem": "vmd", 00:23:36.071 "config": [] 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "subsystem": "accel", 00:23:36.071 "config": [ 00:23:36.071 { 00:23:36.071 "method": "accel_set_options", 00:23:36.071 "params": { 00:23:36.071 "small_cache_size": 128, 00:23:36.071 "large_cache_size": 16, 00:23:36.071 "task_count": 2048, 00:23:36.071 "sequence_count": 2048, 00:23:36.071 "buf_count": 2048 00:23:36.071 } 00:23:36.071 } 00:23:36.071 ] 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "subsystem": "bdev", 00:23:36.071 "config": [ 00:23:36.071 { 00:23:36.071 "method": "bdev_set_options", 00:23:36.071 "params": { 00:23:36.071 "bdev_io_pool_size": 65535, 00:23:36.071 "bdev_io_cache_size": 256, 00:23:36.071 "bdev_auto_examine": true, 00:23:36.071 "iobuf_small_cache_size": 128, 00:23:36.071 "iobuf_large_cache_size": 16 00:23:36.071 } 00:23:36.071 }, 
00:23:36.071 { 00:23:36.071 "method": "bdev_raid_set_options", 00:23:36.071 "params": { 00:23:36.071 "process_window_size_kb": 1024 00:23:36.071 } 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "method": "bdev_iscsi_set_options", 00:23:36.071 "params": { 00:23:36.071 "timeout_sec": 30 00:23:36.071 } 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "method": "bdev_nvme_set_options", 00:23:36.071 "params": { 00:23:36.071 "action_on_timeout": "none", 00:23:36.071 "timeout_us": 0, 00:23:36.071 "timeout_admin_us": 0, 00:23:36.071 "keep_alive_timeout_ms": 10000, 00:23:36.071 "arbitration_burst": 0, 00:23:36.071 "low_priority_weight": 0, 00:23:36.071 "medium_priority_weight": 0, 00:23:36.071 "high_priority_weight": 0, 00:23:36.071 "nvme_adminq_poll_period_us": 10000, 00:23:36.071 "nvme_ioq_poll_period_us": 0, 00:23:36.071 "io_queue_requests": 512, 00:23:36.071 "delay_cmd_submit": true, 00:23:36.071 "transport_retry_count": 4, 00:23:36.071 "bdev_retry_count": 3, 00:23:36.071 "transport_ack_timeout": 0, 00:23:36.071 "ctrlr_loss_timeout_sec": 0, 00:23:36.071 "reconnect_delay_sec": 0, 00:23:36.071 "fast_io_fail_timeout_sec": 0, 00:23:36.071 "disable_auto_failback": false, 00:23:36.071 "generate_uuids": false, 00:23:36.071 "transport_tos": 0, 00:23:36.071 "nvme_error_stat": false, 00:23:36.071 "rdma_srq_size": 0, 00:23:36.071 "io_path_stat": false, 00:23:36.071 "allow_accel_sequence": false, 00:23:36.071 "rdma_max_cq_size": 0, 00:23:36.071 "rdma_cm_event_timeout_ms": 0, 00:23:36.071 "dhchap_digests": [ 00:23:36.071 "sha256", 00:23:36.071 "sha384", 00:23:36.071 "sha512" 00:23:36.071 ], 00:23:36.071 "dhchap_dhgroups": [ 00:23:36.071 "null", 00:23:36.071 "ffdhe2048", 00:23:36.071 "ffdhe3072", 00:23:36.071 "ffdhe4096", 00:23:36.071 "ffdhe6144", 00:23:36.071 "ffdhe8192" 00:23:36.071 ] 00:23:36.071 } 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "method": "bdev_nvme_attach_controller", 00:23:36.071 "params": { 00:23:36.071 "name": "nvme0", 00:23:36.071 "trtype": "TCP", 00:23:36.071 "adrfam": "IPv4", 00:23:36.071 "traddr": "10.0.0.2", 00:23:36.071 "trsvcid": "4420", 00:23:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.071 "prchk_reftag": false, 00:23:36.071 "prchk_guard": false, 00:23:36.071 "ctrlr_loss_timeout_sec": 0, 00:23:36.071 "reconnect_delay_sec": 0, 00:23:36.071 "fast_io_fail_timeout_sec": 0, 00:23:36.071 "psk": "key0", 00:23:36.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.071 "hdgst": false, 00:23:36.071 "ddgst": false 00:23:36.071 } 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "method": "bdev_nvme_set_hotplug", 00:23:36.071 "params": { 00:23:36.071 "period_us": 100000, 00:23:36.071 "enable": false 00:23:36.071 } 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "method": "bdev_enable_histogram", 00:23:36.071 "params": { 00:23:36.071 "name": "nvme0n1", 00:23:36.071 "enable": true 00:23:36.071 } 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "method": "bdev_wait_for_examine" 00:23:36.071 } 00:23:36.071 ] 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "subsystem": "nbd", 00:23:36.071 "config": [] 00:23:36.071 } 00:23:36.071 ] 00:23:36.071 }' 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1820349 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1820349 ']' 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1820349 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:36.071 
16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1820349 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1820349' 00:23:36.071 killing process with pid 1820349 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1820349 00:23:36.071 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.071 00:23:36.071 Latency(us) 00:23:36.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.071 =================================================================================================================== 00:23:36.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.071 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1820349 00:23:36.328 16:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1820324 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1820324 ']' 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1820324 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1820324 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1820324' 00:23:36.329 killing process with pid 1820324 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1820324 00:23:36.329 [2024-05-15 16:44:43.466453] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:36.329 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1820324 00:23:36.587 16:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:36.587 16:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:36.587 "subsystems": [ 00:23:36.587 { 00:23:36.587 "subsystem": "keyring", 00:23:36.587 "config": [ 00:23:36.587 { 00:23:36.587 "method": "keyring_file_add_key", 00:23:36.587 "params": { 00:23:36.587 "name": "key0", 00:23:36.587 "path": "/tmp/tmp.ySg2DgbvoM" 00:23:36.587 } 00:23:36.587 } 00:23:36.587 ] 00:23:36.587 }, 00:23:36.587 { 00:23:36.587 "subsystem": "iobuf", 00:23:36.587 "config": [ 00:23:36.587 { 00:23:36.587 "method": "iobuf_set_options", 00:23:36.587 "params": { 00:23:36.587 "small_pool_count": 8192, 00:23:36.587 "large_pool_count": 1024, 00:23:36.587 "small_bufsize": 8192, 00:23:36.587 "large_bufsize": 135168 00:23:36.587 } 00:23:36.587 } 00:23:36.587 ] 00:23:36.587 }, 00:23:36.587 { 00:23:36.587 "subsystem": "sock", 00:23:36.587 "config": [ 00:23:36.587 { 00:23:36.587 "method": "sock_impl_set_options", 00:23:36.587 "params": { 00:23:36.587 "impl_name": "posix", 00:23:36.587 "recv_buf_size": 2097152, 00:23:36.587 "send_buf_size": 2097152, 00:23:36.587 
"enable_recv_pipe": true, 00:23:36.587 "enable_quickack": false, 00:23:36.587 "enable_placement_id": 0, 00:23:36.587 "enable_zerocopy_send_server": true, 00:23:36.587 "enable_zerocopy_send_client": false, 00:23:36.587 "zerocopy_threshold": 0, 00:23:36.587 "tls_version": 0, 00:23:36.587 "enable_ktls": false 00:23:36.587 } 00:23:36.587 }, 00:23:36.587 { 00:23:36.587 "method": "sock_impl_set_options", 00:23:36.587 "params": { 00:23:36.587 "impl_name": "ssl", 00:23:36.587 "recv_buf_size": 4096, 00:23:36.587 "send_buf_size": 4096, 00:23:36.587 "enable_recv_pipe": true, 00:23:36.587 "enable_quickack": false, 00:23:36.587 "enable_placement_id": 0, 00:23:36.587 "enable_zerocopy_send_server": true, 00:23:36.587 "enable_zerocopy_send_client": false, 00:23:36.587 "zerocopy_threshold": 0, 00:23:36.587 "tls_version": 0, 00:23:36.587 "enable_ktls": false 00:23:36.587 } 00:23:36.587 } 00:23:36.587 ] 00:23:36.587 }, 00:23:36.587 { 00:23:36.587 "subsystem": "vmd", 00:23:36.587 "config": [] 00:23:36.587 }, 00:23:36.587 { 00:23:36.587 "subsystem": "accel", 00:23:36.587 "config": [ 00:23:36.587 { 00:23:36.587 "method": "accel_set_options", 00:23:36.588 "params": { 00:23:36.588 "small_cache_size": 128, 00:23:36.588 "large_cache_size": 16, 00:23:36.588 "task_count": 2048, 00:23:36.588 "sequence_count": 2048, 00:23:36.588 "buf_count": 2048 00:23:36.588 } 00:23:36.588 } 00:23:36.588 ] 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "subsystem": "bdev", 00:23:36.588 "config": [ 00:23:36.588 { 00:23:36.588 "method": "bdev_set_options", 00:23:36.588 "params": { 00:23:36.588 "bdev_io_pool_size": 65535, 00:23:36.588 "bdev_io_cache_size": 256, 00:23:36.588 "bdev_auto_examine": true, 00:23:36.588 "iobuf_small_cache_size": 128, 00:23:36.588 "iobuf_large_cache_size": 16 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "bdev_raid_set_options", 00:23:36.588 "params": { 00:23:36.588 "process_window_size_kb": 1024 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "bdev_iscsi_set_options", 00:23:36.588 "params": { 00:23:36.588 "timeout_sec": 30 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "bdev_nvme_set_options", 00:23:36.588 "params": { 00:23:36.588 "action_on_timeout": "none", 00:23:36.588 "timeout_us": 0, 00:23:36.588 "timeout_admin_us": 0, 00:23:36.588 "keep_alive_timeout_ms": 10000, 00:23:36.588 "arbitration_burst": 0, 00:23:36.588 "low_priority_weight": 0, 00:23:36.588 "medium_priority_weight": 0, 00:23:36.588 "high_priority_weight": 0, 00:23:36.588 "nvme_adminq_poll_period_us": 10000, 00:23:36.588 "nvme_ioq_poll_period_us": 0, 00:23:36.588 "io_queue_requests": 0, 00:23:36.588 "delay_cmd_submit": true, 00:23:36.588 "transport_retry_count": 4, 00:23:36.588 "bdev_retry_count": 3, 00:23:36.588 "transport_ack_timeout": 0, 00:23:36.588 "ctrlr_loss_timeout_sec": 0, 00:23:36.588 "reconnect_delay_sec": 0, 00:23:36.588 "fast_io_fail_timeout_sec": 0, 00:23:36.588 "disable_auto_failback": false, 00:23:36.588 "generate_uuids": false, 00:23:36.588 "transport_tos": 0, 00:23:36.588 "nvme_error_stat": false, 00:23:36.588 "rdma_srq_size": 0, 00:23:36.588 "io_path_stat": false, 00:23:36.588 "allow_accel_sequence": false, 00:23:36.588 "rdma_max_cq_size": 0, 00:23:36.588 "rdma_cm_event_timeout_ms": 0, 00:23:36.588 "dhchap_digests": [ 00:23:36.588 "sha256", 00:23:36.588 "sha384", 00:23:36.588 "sha512" 00:23:36.588 ], 00:23:36.588 "dhchap_dhgroups": [ 00:23:36.588 "null", 00:23:36.588 "ffdhe2048", 00:23:36.588 "ffdhe3072", 00:23:36.588 "ffdhe4096", 00:23:36.588 "ffdhe6144", 
00:23:36.588 "ffdhe8192" 00:23:36.588 ] 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "bdev_nvme_set_hotplug", 00:23:36.588 "params": { 00:23:36.588 "period_us": 100000, 00:23:36.588 "enable": false 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "bdev_malloc_create", 00:23:36.588 "params": { 00:23:36.588 "name": "malloc0", 00:23:36.588 "num_blocks": 8192, 00:23:36.588 "block_size": 4096, 00:23:36.588 "physical_block_size": 4096, 00:23:36.588 "uuid": "22bee2b4-3d65-4ed4-b8f2-3c89c6718a97", 00:23:36.588 "optimal_io_boundary": 0 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "bdev_wait_for_examine" 00:23:36.588 } 00:23:36.588 ] 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "subsystem": "nbd", 00:23:36.588 "config": [] 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "subsystem": "scheduler", 00:23:36.588 "config": [ 00:23:36.588 { 00:23:36.588 "method": "framework_set_scheduler", 00:23:36.588 "params": { 00:23:36.588 "name": "static" 00:23:36.588 } 00:23:36.588 } 00:23:36.588 ] 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "subsystem": "nvmf", 00:23:36.588 "config": [ 00:23:36.588 { 00:23:36.588 "method": "nvmf_set_config", 00:23:36.588 "params": { 00:23:36.588 "discovery_filter": "match_any", 00:23:36.588 "admin_cmd_passthru": { 00:23:36.588 "identify_ctrlr": false 00:23:36.588 } 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_set_max_subsystems", 00:23:36.588 "params": { 00:23:36.588 "max_subsystems": 1024 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_set_crdt", 00:23:36.588 "params": { 00:23:36.588 "crdt1": 0, 00:23:36.588 "crdt2": 0, 00:23:36.588 "crdt3": 0 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_create_transport", 00:23:36.588 "params": { 00:23:36.588 "trtype": "TCP", 00:23:36.588 "max_queue_depth": 128, 00:23:36.588 "max_io_qpairs_per_ctrlr": 127, 00:23:36.588 "in_capsule_data_size": 4096, 00:23:36.588 "max_io_size": 131072, 00:23:36.588 "io_unit_size": 131072, 00:23:36.588 "max_aq_depth": 128, 00:23:36.588 "num_shared_buffers": 511, 00:23:36.588 "buf_cache_size": 4294967295, 00:23:36.588 "dif_insert_or_strip": false, 00:23:36.588 "zcopy": false, 00:23:36.588 "c2h_success": false, 00:23:36.588 "sock_priority": 0, 00:23:36.588 "abort_timeout_sec": 1, 00:23:36.588 "ack_timeout": 0, 00:23:36.588 "data_wr_pool_size": 0 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_create_subsystem", 00:23:36.588 "params": { 00:23:36.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.588 "allow_any_host": false, 00:23:36.588 "serial_number": "00000000000000000000", 00:23:36.588 "model_number": "SPDK bdev Controller", 16:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.588 "max_namespaces": 32, 00:23:36.588 "min_cntlid": 1, 00:23:36.588 "max_cntlid": 65519, 00:23:36.588 "ana_reporting": false 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_subsystem_add_host", 00:23:36.588 "params": { 00:23:36.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.588 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.588 "psk": "key0" 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_subsystem_add_ns", 00:23:36.588 "params": { 00:23:36.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.588 "namespace": { 00:23:36.588 "nsid": 1, 00:23:36.588 "bdev_name": "malloc0", 00:23:36.588 "nguid": "22BEE2B43D654ED4B8F23C89C6718A97", 00:23:36.588 "uuid":
"22bee2b4-3d65-4ed4-b8f2-3c89c6718a97", 00:23:36.588 "no_auto_visible": false 00:23:36.588 } 00:23:36.588 } 00:23:36.588 }, 00:23:36.588 { 00:23:36.588 "method": "nvmf_subsystem_add_listener", 00:23:36.588 "params": { 00:23:36.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.588 "listen_address": { 00:23:36.588 "trtype": "TCP", 00:23:36.588 "adrfam": "IPv4", 00:23:36.588 "traddr": "10.0.0.2", 00:23:36.588 "trsvcid": "4420" 00:23:36.588 }, 00:23:36.588 "secure_channel": true 00:23:36.588 } 00:23:36.588 } 00:23:36.588 ] 00:23:36.588 } 00:23:36.588 ] 00:23:36.588 }' 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1820754 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1820754 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1820754 ']' 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.588 16:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.588 [2024-05-15 16:44:43.740768] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:36.588 [2024-05-15 16:44:43.740841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.588 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.846 [2024-05-15 16:44:43.817474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.846 [2024-05-15 16:44:43.904183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.846 [2024-05-15 16:44:43.904255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.846 [2024-05-15 16:44:43.904287] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.846 [2024-05-15 16:44:43.904308] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.846 [2024-05-15 16:44:43.904319] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.846 [2024-05-15 16:44:43.904395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.104 [2024-05-15 16:44:44.135181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.104 [2024-05-15 16:44:44.167152] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:37.104 [2024-05-15 16:44:44.167232] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.104 [2024-05-15 16:44:44.178444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1820902 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1820902 /var/tmp/bdevperf.sock 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1820902 ']' 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
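This pass differs from the earlier ones in how configuration reaches the processes: instead of issuing RPCs after startup, the JSON captured by save_config (the tgtcfg and bperfcfg variables above) is replayed at startup through the -c /dev/fd/62 and -c /dev/fd/63 descriptors seen on the command lines. Roughly, in bash, with process substitution supplying the descriptors:

# restart the target from its own saved configuration (the echo '{ ... }' traces carry this JSON)
nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
# launch bdevperf the same way, from the config captured over /var/tmp/bdevperf.sock
bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")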
00:23:37.669 16:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:37.669 "subsystems": [ 00:23:37.669 { 00:23:37.669 "subsystem": "keyring", 00:23:37.669 "config": [ 00:23:37.669 { 00:23:37.669 "method": "keyring_file_add_key", 00:23:37.669 "params": { 00:23:37.669 "name": "key0", 00:23:37.669 "path": "/tmp/tmp.ySg2DgbvoM" 00:23:37.669 } 00:23:37.669 } 00:23:37.669 ] 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "subsystem": "iobuf", 00:23:37.669 "config": [ 00:23:37.669 { 00:23:37.669 "method": "iobuf_set_options", 00:23:37.669 "params": { 00:23:37.669 "small_pool_count": 8192, 00:23:37.669 "large_pool_count": 1024, 00:23:37.669 "small_bufsize": 8192, 00:23:37.669 "large_bufsize": 135168 00:23:37.669 } 00:23:37.669 } 00:23:37.669 ] 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "subsystem": "sock", 00:23:37.669 "config": [ 00:23:37.669 { 00:23:37.669 "method": "sock_impl_set_options", 00:23:37.669 "params": { 00:23:37.669 "impl_name": "posix", 00:23:37.669 "recv_buf_size": 2097152, 00:23:37.669 "send_buf_size": 2097152, 00:23:37.669 "enable_recv_pipe": true, 00:23:37.669 "enable_quickack": false, 00:23:37.669 "enable_placement_id": 0, 00:23:37.669 "enable_zerocopy_send_server": true, 00:23:37.669 "enable_zerocopy_send_client": false, 00:23:37.669 "zerocopy_threshold": 0, 00:23:37.669 "tls_version": 0, 00:23:37.669 "enable_ktls": false 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "sock_impl_set_options", 00:23:37.669 "params": { 00:23:37.669 "impl_name": "ssl", 00:23:37.669 "recv_buf_size": 4096, 00:23:37.669 "send_buf_size": 4096, 00:23:37.669 "enable_recv_pipe": true, 00:23:37.669 "enable_quickack": false, 00:23:37.669 "enable_placement_id": 0, 00:23:37.669 "enable_zerocopy_send_server": true, 00:23:37.669 "enable_zerocopy_send_client": false, 00:23:37.669 "zerocopy_threshold": 0, 00:23:37.669 "tls_version": 0, 00:23:37.669 "enable_ktls": false 00:23:37.669 } 00:23:37.669 } 00:23:37.669 ] 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "subsystem": "vmd", 00:23:37.669 "config": [] 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "subsystem": "accel", 00:23:37.669 "config": [ 00:23:37.669 { 00:23:37.669 "method": "accel_set_options", 00:23:37.669 "params": { 00:23:37.669 "small_cache_size": 128, 00:23:37.669 "large_cache_size": 16, 00:23:37.669 "task_count": 2048, 00:23:37.669 "sequence_count": 2048, 00:23:37.669 "buf_count": 2048 00:23:37.669 } 00:23:37.669 } 00:23:37.669 ] 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "subsystem": "bdev", 00:23:37.669 "config": [ 00:23:37.669 { 00:23:37.669 "method": "bdev_set_options", 00:23:37.669 "params": { 00:23:37.669 "bdev_io_pool_size": 65535, 00:23:37.669 "bdev_io_cache_size": 256, 00:23:37.669 "bdev_auto_examine": true, 00:23:37.669 "iobuf_small_cache_size": 128, 00:23:37.669 "iobuf_large_cache_size": 16 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_raid_set_options", 00:23:37.669 "params": { 00:23:37.669 "process_window_size_kb": 1024 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_iscsi_set_options", 00:23:37.669 "params": { 00:23:37.669 "timeout_sec": 30 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_nvme_set_options", 00:23:37.669 "params": { 00:23:37.669 "action_on_timeout": "none", 00:23:37.669 "timeout_us": 0, 00:23:37.669 "timeout_admin_us": 0, 00:23:37.669 "keep_alive_timeout_ms": 10000, 00:23:37.669 "arbitration_burst": 0, 00:23:37.669 "low_priority_weight": 0, 00:23:37.669 "medium_priority_weight": 0, 00:23:37.669 
"high_priority_weight": 0, 00:23:37.669 "nvme_adminq_poll_period_us": 10000, 00:23:37.669 "nvme_ioq_poll_period_us": 0, 00:23:37.669 "io_queue_requests": 512, 00:23:37.669 "delay_cmd_submit": true, 00:23:37.669 "transport_retry_count": 4, 00:23:37.669 "bdev_retry_count": 3, 00:23:37.669 "transport_ack_timeout": 0, 00:23:37.669 "ctrlr_loss_timeout_sec": 0, 00:23:37.669 "reconnect_delay_sec": 0, 00:23:37.669 "fast_io_fail_timeout_sec": 0, 00:23:37.669 "disable_auto_failback": false, 00:23:37.669 "generate_uuids": false, 00:23:37.669 "transport_tos": 0, 00:23:37.669 "nvme_error_stat": false, 00:23:37.669 "rdma_srq_size": 0, 00:23:37.669 "io_path_stat": false, 00:23:37.669 "allow_accel_sequence": false, 00:23:37.669 "rdma_max_cq_size": 0, 00:23:37.669 "rdma_cm_event_timeout_ms": 0, 00:23:37.669 "dhchap_digests": [ 00:23:37.669 "sha256", 00:23:37.669 "sha384", 00:23:37.669 "sha512" 00:23:37.669 ], 00:23:37.669 "dhchap_dhgroups": [ 00:23:37.669 "null", 00:23:37.669 "ffdhe2048", 00:23:37.669 "ffdhe3072", 00:23:37.669 "ffdhe4096", 00:23:37.669 "ffdhe6144", 00:23:37.669 "ffdhe8192" 00:23:37.669 ] 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_nvme_attach_controller", 00:23:37.669 "params": { 00:23:37.669 "name": "nvme0", 00:23:37.669 "trtype": "TCP", 00:23:37.669 "adrfam": "IPv4", 00:23:37.669 "traddr": "10.0.0.2", 00:23:37.669 "trsvcid": "4420", 00:23:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.669 "prchk_reftag": false, 00:23:37.669 "prchk_guard": false, 00:23:37.669 "ctrlr_loss_timeout_sec": 0, 00:23:37.669 "reconnect_delay_sec": 0, 00:23:37.669 "fast_io_fail_timeout_sec": 0, 00:23:37.669 "psk": "key0", 00:23:37.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.669 "hdgst": false, 00:23:37.669 "ddgst": false 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_nvme_set_hotplug", 00:23:37.669 "params": { 00:23:37.669 "period_us": 100000, 00:23:37.669 "enable": false 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_enable_histogram", 00:23:37.669 "params": { 00:23:37.669 "name": "nvme0n1", 00:23:37.669 "enable": true 00:23:37.669 } 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "method": "bdev_wait_for_examine" 00:23:37.669 } 00:23:37.669 ] 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "subsystem": "nbd", 00:23:37.670 "config": [] 00:23:37.670 } 00:23:37.670 ] 00:23:37.670 }' 00:23:37.670 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:37.670 16:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.670 [2024-05-15 16:44:44.820006] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:23:37.670 [2024-05-15 16:44:44.820082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820902 ] 00:23:37.670 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.670 [2024-05-15 16:44:44.891236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.927 [2024-05-15 16:44:44.978547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.927 [2024-05-15 16:44:45.146691] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.860 16:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.860 16:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:38.860 16:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.860 16:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:38.860 16:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.860 16:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.118 Running I/O for 1 seconds... 00:23:40.050 00:23:40.050 Latency(us) 00:23:40.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.050 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:40.050 Verification LBA range: start 0x0 length 0x2000 00:23:40.050 nvme0n1 : 1.03 3254.35 12.71 0.00 0.00 38803.21 9417.77 51652.08 00:23:40.050 =================================================================================================================== 00:23:40.050 Total : 3254.35 12.71 0.00 0.00 38803.21 9417.77 51652.08 00:23:40.050 0 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:40.050 nvmf_trace.0 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1820902 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1820902 ']' 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1820902 
00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1820902 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1820902' 00:23:40.050 killing process with pid 1820902 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1820902 00:23:40.050 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.050 00:23:40.050 Latency(us) 00:23:40.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.050 =================================================================================================================== 00:23:40.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.050 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1820902 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.308 rmmod nvme_tcp 00:23:40.308 rmmod nvme_fabrics 00:23:40.308 rmmod nvme_keyring 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1820754 ']' 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1820754 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1820754 ']' 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1820754 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.308 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1820754 00:23:40.566 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:40.566 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:40.566 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1820754' 00:23:40.566 killing process with pid 1820754 00:23:40.566 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1820754 00:23:40.566 [2024-05-15 16:44:47.551499] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:40.566 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 1820754 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.826 16:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.729 16:44:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.729 16:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.H4vn06hLG5 /tmp/tmp.tv9o6ZveVB /tmp/tmp.ySg2DgbvoM 00:23:42.729 00:23:42.729 real 1m19.618s 00:23:42.729 user 2m2.125s 00:23:42.729 sys 0m26.817s 00:23:42.729 16:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:42.729 16:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.729 ************************************ 00:23:42.729 END TEST nvmf_tls 00:23:42.729 ************************************ 00:23:42.729 16:44:49 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:42.729 16:44:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:42.729 16:44:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:42.729 16:44:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.729 ************************************ 00:23:42.729 START TEST nvmf_fips 00:23:42.729 ************************************ 00:23:42.729 16:44:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:42.729 * Looking for test storage... 
00:23:43.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.014 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.015 16:44:49 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:43.015 16:44:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:43.015 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:43.015 Error setting digest 00:23:43.016 008248D9547F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:43.016 008248D9547F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.016 16:44:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.542 
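Note: the xtrace above is nvmf/common.sh classifying NICs by PCI vendor/device ID (both ports here are Intel E810, 0x8086:0x159b) before it walks sysfs to find the backing net devices. A minimal standalone sketch of that sysfs walk, assuming the usual /sys/bus/pci layout; the real logic, with its pci_bus_cache and e810/x722/mlx arrays, lives in test/nvmf/common.sh:

    #!/usr/bin/env bash
    # Sketch: list net interfaces backed by Intel E810 NICs (0x8086:0x159b).
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")       # e.g. 0x8086
      device=$(<"$pci/device")       # e.g. 0x159b
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        for net in "$pci"/net/*; do  # net/ children appear once a driver binds
          [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
        done
      fi
    done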
16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:45.542 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.542 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:45.543 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:45.543 Found net devices under 0000:09:00.0: cvl_0_0 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:45.543 Found net devices under 0000:09:00.1: cvl_0_1 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:23:45.543 00:23:45.543 --- 10.0.0.2 ping statistics --- 00:23:45.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.543 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:45.543 00:23:45.543 --- 10.0.0.1 ping statistics --- 00:23:45.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.543 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1823562 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1823562 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1823562 ']' 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:45.543 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.543 [2024-05-15 16:44:52.704393] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:45.543 [2024-05-15 16:44:52.704468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.543 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.801 [2024-05-15 16:44:52.777877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.801 [2024-05-15 16:44:52.859363] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.801 [2024-05-15 16:44:52.859415] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
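Note: at this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace with core mask 0x2, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A hedged sketch of that wait loop, assuming scripts/rpc.py is on PATH and using the stock rpc_get_methods RPC (waitforrpc is a hypothetical helper name; the real waitforlisten in autotest_common.sh also verifies the pid stays alive while polling):

    # Sketch: poll an SPDK app's RPC socket until it starts serving.
    waitforrpc() {
      local sock=${1:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1    # timed out
    }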
00:23:45.801 [2024-05-15 16:44:52.859446] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.801 [2024-05-15 16:44:52.859460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.801 [2024-05-15 16:44:52.859470] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.801 [2024-05-15 16:44:52.859504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.801 16:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.059 [2024-05-15 16:44:53.243347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.059 [2024-05-15 16:44:53.259290] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:46.059 [2024-05-15 16:44:53.259356] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.059 [2024-05-15 16:44:53.259603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.316 [2024-05-15 16:44:53.291775] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:46.316 malloc0 00:23:46.316 16:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.316 16:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1823595 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1823595 /var/tmp/bdevperf.sock 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 1823595 ']' 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:46.317 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:46.317 [2024-05-15 16:44:53.381780] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:23:46.317 [2024-05-15 16:44:53.381859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1823595 ] 00:23:46.317 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.317 [2024-05-15 16:44:53.446768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.317 [2024-05-15 16:44:53.526628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.604 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:46.604 16:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:46.604 16:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:46.862 [2024-05-15 16:44:53.867622] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.862 [2024-05-15 16:44:53.867765] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:46.862 TLSTESTn1 00:23:46.862 16:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:46.862 Running I/O for 10 seconds... 
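Note: the fips.sh flow above wires up NVMe/TCP with TLS: a retained PSK in the NVMe TLS interchange format (NVMeTLSkey-1:01:...) is written to key.txt with mode 0600, the target gets a malloc namespace plus a host entry carrying the PSK path, and bdevperf attaches with the same --psk file before perform_tests drives verify I/O for 10 seconds. A condensed sketch of the RPC sequence; flag spellings such as --secure-channel and the path form of --psk reflect this SPDK revision's deprecated PSK-path interface (per the nvmf_tcp_psk_path warning above) and may differ in other versions:

    KEY=key.txt
    echo -n 'NVMeTLSkey-1:01:...' > "$KEY" && chmod 0600 "$KEY"   # key body elided

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create -b malloc0 32 4096
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Initiator side, against the bdevperf RPC socket (-z -r /var/tmp/bdevperf.sock):
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"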
00:23:59.052 00:23:59.052 Latency(us) 00:23:59.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:59.052 Verification LBA range: start 0x0 length 0x2000 00:23:59.052 TLSTESTn1 : 10.03 2450.97 9.57 0.00 0.00 52127.43 11942.12 63691.28 00:23:59.052 =================================================================================================================== 00:23:59.052 Total : 2450.97 9.57 0.00 0.00 52127.43 11942.12 63691.28 00:23:59.052 0 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:59.052 nvmf_trace.0 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1823595 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1823595 ']' 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1823595 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:59.052 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1823595 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1823595' 00:23:59.053 killing process with pid 1823595 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1823595 00:23:59.053 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.053 00:23:59.053 Latency(us) 00:23:59.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.053 =================================================================================================================== 00:23:59.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.053 [2024-05-15 16:45:04.247119] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1823595 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.053 rmmod nvme_tcp 00:23:59.053 rmmod nvme_fabrics 00:23:59.053 rmmod nvme_keyring 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1823562 ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1823562 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1823562 ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1823562 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1823562 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1823562' 00:23:59.053 killing process with pid 1823562 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1823562 00:23:59.053 [2024-05-15 16:45:04.571559] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:59.053 [2024-05-15 16:45:04.571615] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1823562 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.053 16:45:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.988 00:23:59.988 real 0m16.961s 00:23:59.988 user 0m17.621s 00:23:59.988 sys 0m7.828s 00:23:59.988 16:45:06 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.988 ************************************ 00:23:59.988 END TEST nvmf_fips 00:23:59.988 ************************************ 00:23:59.988 16:45:06 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:59.988 16:45:06 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:59.988 16:45:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:59.988 16:45:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:59.988 16:45:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.988 ************************************ 00:23:59.988 START TEST nvmf_fuzz 00:23:59.988 ************************************ 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:59.988 * Looking for test storage... 00:23:59.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.988 16:45:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:02.517 16:45:09 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:02.517 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:02.517 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:02.517 Found net devices under 0000:09:00.0: cvl_0_0 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.517 16:45:09 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:02.517 Found net devices under 0000:09:00.1: cvl_0_1 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:02.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:24:02.517 00:24:02.517 --- 10.0.0.2 ping statistics --- 00:24:02.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.517 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:02.517 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:02.517 00:24:02.517 --- 10.0.0.1 ping statistics --- 00:24:02.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.517 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1827739 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1827739 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1827739 ']' 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
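The nvmf_tcp_init records above wire the two physically looped e810 ports into a self-contained NVMe/TCP topology: cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP port 4420, and the two pings prove reachability in both directions. Condensed into a standalone sketch (run as root; interface names and addresses are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

The namespace is what lets one machine act as both initiator and target over real NICs: the kernel initiator in the root namespace cannot short-circuit to the SPDK target, because the target's port only exists inside cvl_0_0_ns_spdk.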
00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:02.518 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.776 Malloc0 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:02.776 16:45:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:34.835 Fuzzing completed. 
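Target provisioning for the fuzz run happens over the RPC socket once nvmf_tgt is up: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem exposing that bdev as a namespace, and a listener on 10.0.0.2:4420. The rpc_cmd wrapper in the trace effectively forwards to scripts/rpc.py, so an equivalent standalone sequence looks like this (socket path assumed to be the default /var/tmp/spdk.sock):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192     # transport flags exactly as the harness passes them
  $RPC bdev_malloc_create -b Malloc0 64 512        # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The first nvme_fuzz pass then drives randomly generated commands at that subsystem for 30 seconds (-t 30) from a fixed seed (-S 123456), so any crash is reproducible; the opcode dump below lists which admin and I/O opcodes ever completed successfully. The second pass swaps random generation for the hand-written command list in example.json (-j).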
Shutting down the fuzz application
00:24:34.835
00:24:34.836 Dumping successful admin opcodes:
00:24:34.836 8, 9, 10, 24,
00:24:34.836 Dumping successful io opcodes:
00:24:34.836 0, 9,
00:24:34.836 NS: 0x200003aeff00 I/O qp, Total commands completed: 439300, total successful commands: 2562, random_seed: 1852013376
00:24:34.836 NS: 0x200003aeff00 admin qp, Total commands completed: 55008, total successful commands: 440, random_seed: 1494963136
00:24:34.836 16:45:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:24:35.093 Fuzzing completed. Shutting down the fuzz application
00:24:35.093
00:24:35.093 Dumping successful admin opcodes:
00:24:35.093 24,
00:24:35.093 Dumping successful io opcodes:
00:24:35.093
00:24:35.093 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 97673638
00:24:35.093 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 97800350
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:35.093 rmmod nvme_tcp
00:24:35.093 rmmod nvme_fabrics
00:24:35.093 rmmod nvme_keyring
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1827739 ']'
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1827739
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1827739 ']'
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 1827739
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1827739
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:24:35.093
16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1827739' 00:24:35.093 killing process with pid 1827739 00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 1827739 00:24:35.093 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 1827739 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.351 16:45:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.882 16:45:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.882 16:45:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:37.882 00:24:37.882 real 0m37.601s 00:24:37.882 user 0m51.120s 00:24:37.882 sys 0m15.706s 00:24:37.882 16:45:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:37.882 16:45:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.882 ************************************ 00:24:37.882 END TEST nvmf_fuzz 00:24:37.882 ************************************ 00:24:37.882 16:45:44 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:37.882 16:45:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:37.882 16:45:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:37.882 16:45:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.882 ************************************ 00:24:37.882 START TEST nvmf_multiconnection 00:24:37.882 ************************************ 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:37.882 * Looking for test storage... 
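The teardown that closed the fuzz test above mirrors its setup, and it is the same path the EXIT trap registered at startup would take on a failure: drop the subsystem, kill the target and reap it, unload the initiator-side kernel modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are modprobe -v output), and dismantle the namespace. Roughly, with the pid and interface names from this run, and assuming _remove_spdk_ns boils down to deleting the namespace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1827739 && wait 1827739          # nvmf_tgt pid from this run
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1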
00:24:37.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.882 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.883 16:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.811 16:45:46 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.811 16:45:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:39.811 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:39.811 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.811 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:39.812 Found net devices under 0000:09:00.0: cvl_0_0 00:24:39.812 16:45:47 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:39.812 Found net devices under 0000:09:00.1: cvl_0_1 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.812 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:40.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:24:40.070 00:24:40.070 --- 10.0.0.2 ping statistics --- 00:24:40.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.070 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:40.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:24:40.070 00:24:40.070 --- 10.0.0.1 ping statistics --- 00:24:40.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.070 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1833766 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1833766 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1833766 ']' 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
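Unlike the fuzz target, which ran on a single core (-m 0x1), the multiconnection target gets four reactors (-m 0xF) because it will serve eleven subsystems at once, and it is launched inside the namespace so that its listener binds on the target-side interface. waitforlisten then blocks until the RPC socket answers, which is what the "Waiting for process..." message above is about. A minimal sketch of that start-and-wait pattern (the real helper uses a bounded retry loop; the 100 x 0.1 s budget here is an assumption, and the socket path is the default):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do                                  # poll until the RPC socket answers
      $RPC -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done

Note that the RPC socket is a UNIX-domain socket on the shared filesystem, so rpc.py can reach it from the root namespace even though the target's TCP listener is confined to cvl_0_0_ns_spdk.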
00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.070 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.070 [2024-05-15 16:45:47.208374] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:24:40.070 [2024-05-15 16:45:47.208454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.070 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.070 [2024-05-15 16:45:47.290084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.328 [2024-05-15 16:45:47.383158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.328 [2024-05-15 16:45:47.383213] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.328 [2024-05-15 16:45:47.383238] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.328 [2024-05-15 16:45:47.383252] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.328 [2024-05-15 16:45:47.383264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.328 [2024-05-15 16:45:47.383321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.328 [2024-05-15 16:45:47.383352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.328 [2024-05-15 16:45:47.383475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.328 [2024-05-15 16:45:47.383782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.328 [2024-05-15 16:45:47.524755] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:40.328 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.328 16:45:47 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 Malloc1 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 [2024-05-15 16:45:47.579725] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:40.586 [2024-05-15 16:45:47.580029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 Malloc2 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:40.586 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 Malloc3 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 Malloc4 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 Malloc5 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 Malloc6 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:40.587 
16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.587 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 Malloc7 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 Malloc8 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.845 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 Malloc9 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:47 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 Malloc10 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 Malloc11 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.846 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:41.410 16:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:41.410 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:41.410 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.410 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:41.410 16:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.988 16:45:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:44.245 16:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:44.245 16:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:44.245 16:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.245 16:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:44.245 16:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:46.142 16:45:53 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.142 16:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:47.075 16:45:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:47.075 16:45:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:47.075 16:45:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.075 16:45:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:47.075 16:45:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.973 16:45:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:49.539 16:45:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:49.539 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:49.539 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.539 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:49.539 16:45:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 
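The pattern traced above repeats once per subsystem: lines 21-25 of target/multiconnection.sh provision a malloc-backed namespace behind a TCP listener, and lines 28-30 connect the initiator to each subsystem in turn. The following is a condensed sketch reconstructed from the commands echoed in the xtrace, not copied from the script itself; rpc_cmd and waitforserial are the harness helpers seen in the trace, and NVMF_SUBSYS expands to 11 per the "seq 1 11" echoed above.

    # Target side: one 64 MiB malloc bdev (512-byte blocks) per subsystem,
    # exposed as a namespace behind a TCP listener on 10.0.0.2:4420.
    # -a allows any host to connect; -s sets the subsystem serial number.
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    # Initiator side: connect to each subsystem, then wait until the kernel
    # exposes a block device whose serial matches SPDK$i.
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
            --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        waitforserial SPDK$i
    done

The serial passed via -s is what waitforserial greps for in lsblk output once the corresponding /dev/nvme*n1 device appears, which is why each connect below is followed by a serial check.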
00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.064 16:45:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:52.322 16:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:52.322 16:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:52.322 16:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.322 16:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:52.322 16:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.224 16:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:55.156 16:46:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:55.156 16:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:55.156 16:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.156 16:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:55.156 16:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.053 16:46:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:57.986 16:46:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:57.986 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:57.987 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.987 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:57.987 16:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.941 16:46:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:00.873 16:46:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:00.873 16:46:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:00.873 16:46:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.873 16:46:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:00.873 16:46:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:02.772 16:46:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.773 16:46:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:03.706 16:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:03.706 16:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 
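The waitforserial calls interleaved through this connect loop come from common/autotest_common.sh (lines 1194-1204 in the trace): the helper polls lsblk until a block device advertising the expected serial shows up. A sketch of the loop as it can be read back from the echoed commands; the optional second argument and the timeout branch are assumptions, since only the success path (return 0) appears in this log.

    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        # The "[[ -n '' ]]" echoed above is presumably this test with no second
        # argument given; an expected-device count override is an assumption.
        [[ -n $2 ]] && nvme_device_counter=$2
        sleep 2
        while (( i++ <= 15 )); do
            # Count block devices whose SERIAL column matches (e.g. SPDK9).
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1   # assumed: give up after ~16 tries; not exercised in this run
    }

In this run every device shows up after the first two-second sleep, which is why each connect is followed by exactly one lsblk/grep pair before return 0.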
00:25:03.706 16:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.706 16:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:03.706 16:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:05.602 16:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:06.536 16:46:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:06.536 16:46:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:06.536 16:46:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.536 16:46:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:06.536 16:46:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:08.432 16:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:08.432 16:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:08.432 16:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:08.432 16:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:08.432 16:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:08.433 16:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:08.433 16:46:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.433 16:46:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:09.366 16:46:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:09.366 16:46:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:09.366 16:46:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.366 16:46:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:09.366 16:46:16 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1201 -- # sleep 2 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:11.265 16:46:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:11.265 [global] 00:25:11.265 thread=1 00:25:11.265 invalidate=1 00:25:11.265 rw=read 00:25:11.265 time_based=1 00:25:11.265 runtime=10 00:25:11.265 ioengine=libaio 00:25:11.265 direct=1 00:25:11.265 bs=262144 00:25:11.265 iodepth=64 00:25:11.265 norandommap=1 00:25:11.265 numjobs=1 00:25:11.265 00:25:11.265 [job0] 00:25:11.265 filename=/dev/nvme0n1 00:25:11.265 [job1] 00:25:11.265 filename=/dev/nvme10n1 00:25:11.265 [job2] 00:25:11.265 filename=/dev/nvme1n1 00:25:11.265 [job3] 00:25:11.265 filename=/dev/nvme2n1 00:25:11.265 [job4] 00:25:11.265 filename=/dev/nvme3n1 00:25:11.265 [job5] 00:25:11.265 filename=/dev/nvme4n1 00:25:11.265 [job6] 00:25:11.265 filename=/dev/nvme5n1 00:25:11.265 [job7] 00:25:11.265 filename=/dev/nvme6n1 00:25:11.265 [job8] 00:25:11.265 filename=/dev/nvme7n1 00:25:11.265 [job9] 00:25:11.265 filename=/dev/nvme8n1 00:25:11.265 [job10] 00:25:11.265 filename=/dev/nvme9n1 00:25:11.523 Could not set queue depth (nvme0n1) 00:25:11.523 Could not set queue depth (nvme10n1) 00:25:11.523 Could not set queue depth (nvme1n1) 00:25:11.523 Could not set queue depth (nvme2n1) 00:25:11.523 Could not set queue depth (nvme3n1) 00:25:11.523 Could not set queue depth (nvme4n1) 00:25:11.523 Could not set queue depth (nvme5n1) 00:25:11.523 Could not set queue depth (nvme6n1) 00:25:11.523 Could not set queue depth (nvme7n1) 00:25:11.523 Could not set queue depth (nvme8n1) 00:25:11.523 Could not set queue depth (nvme9n1) 00:25:11.523 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:11.523 fio-3.35 00:25:11.523 Starting 11 threads 00:25:23.731 00:25:23.731 job0: (groupid=0, jobs=1): err= 0: pid=1838012: Wed May 15 16:46:29 2024 00:25:23.731 read: IOPS=527, BW=132MiB/s (138MB/s)(1328MiB/10066msec) 00:25:23.731 slat (usec): min=14, max=72625, avg=1711.72, stdev=5043.90 00:25:23.731 clat (msec): min=10, max=331, avg=119.48, stdev=43.29 00:25:23.731 lat (msec): min=10, max=331, avg=121.19, stdev=44.04 00:25:23.731 clat percentiles (msec): 00:25:23.731 | 1.00th=[ 34], 5.00th=[ 58], 10.00th=[ 72], 20.00th=[ 83], 00:25:23.731 | 30.00th=[ 91], 40.00th=[ 101], 50.00th=[ 113], 60.00th=[ 131], 00:25:23.731 | 70.00th=[ 144], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 190], 00:25:23.731 | 99.00th=[ 245], 99.50th=[ 275], 99.90th=[ 296], 99.95th=[ 305], 00:25:23.731 | 99.99th=[ 334] 00:25:23.731 bw ( KiB/s): min=72704, max=206848, per=6.91%, avg=134287.85, stdev=38968.59, samples=20 00:25:23.731 iops : min= 284, max= 808, avg=524.55, stdev=152.23, samples=20 00:25:23.731 lat (msec) : 20=0.26%, 50=2.60%, 100=37.40%, 250=59.00%, 500=0.73% 00:25:23.731 cpu : usr=0.31%, sys=2.09%, ctx=1174, majf=0, minf=3721 00:25:23.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:23.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.731 issued rwts: total=5310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.731 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.731 job1: (groupid=0, jobs=1): err= 0: pid=1838013: Wed May 15 16:46:29 2024 00:25:23.731 read: IOPS=950, BW=238MiB/s (249MB/s)(2405MiB/10126msec) 00:25:23.731 slat (usec): min=13, max=88265, avg=975.95, stdev=2947.89 00:25:23.731 clat (msec): min=3, max=244, avg=66.32, stdev=35.18 00:25:23.731 lat (msec): min=3, max=245, avg=67.29, stdev=35.64 00:25:23.731 clat percentiles (msec): 00:25:23.731 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 33], 00:25:23.731 | 30.00th=[ 36], 40.00th=[ 51], 50.00th=[ 63], 60.00th=[ 71], 00:25:23.731 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 113], 95.00th=[ 136], 00:25:23.731 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 230], 99.95th=[ 245], 00:25:23.731 | 99.99th=[ 245] 00:25:23.731 bw ( KiB/s): min=115200, max=461824, per=12.58%, avg=244653.05, stdev=103668.18, samples=20 00:25:23.731 iops : min= 450, max= 1804, avg=955.65, stdev=404.94, samples=20 00:25:23.731 lat (msec) : 4=0.05%, 10=0.68%, 20=0.51%, 50=38.44%, 100=44.36% 00:25:23.731 lat (msec) : 250=15.97% 00:25:23.731 cpu : usr=0.53%, sys=3.24%, ctx=1875, majf=0, minf=4097 00:25:23.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:23.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.731 issued rwts: total=9620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.731 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.731 job2: (groupid=0, jobs=1): err= 0: pid=1838014: Wed May 15 16:46:29 2024 00:25:23.731 read: IOPS=814, BW=204MiB/s (214MB/s)(2054MiB/10084msec) 00:25:23.732 slat (usec): min=12, max=113214, avg=1012.33, stdev=3718.51 00:25:23.732 clat (msec): min=3, max=303, avg=77.46, stdev=45.95 00:25:23.732 lat (msec): min=3, max=364, avg=78.47, 
stdev=46.49 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 38], 00:25:23.732 | 30.00th=[ 46], 40.00th=[ 57], 50.00th=[ 66], 60.00th=[ 75], 00:25:23.732 | 70.00th=[ 89], 80.00th=[ 113], 90.00th=[ 146], 95.00th=[ 169], 00:25:23.732 | 99.00th=[ 218], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 305], 00:25:23.732 | 99.99th=[ 305] 00:25:23.732 bw ( KiB/s): min=83968, max=449536, per=10.73%, avg=208656.15, stdev=99971.06, samples=20 00:25:23.732 iops : min= 328, max= 1756, avg=815.05, stdev=390.49, samples=20 00:25:23.732 lat (msec) : 4=0.01%, 10=0.33%, 20=1.44%, 50=31.98%, 100=41.75% 00:25:23.732 lat (msec) : 250=23.69%, 500=0.80% 00:25:23.732 cpu : usr=0.47%, sys=2.69%, ctx=1673, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: total=8215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job3: (groupid=0, jobs=1): err= 0: pid=1838015: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=603, BW=151MiB/s (158MB/s)(1521MiB/10078msec) 00:25:23.732 slat (usec): min=10, max=86129, avg=1295.62, stdev=4610.66 00:25:23.732 clat (msec): min=3, max=250, avg=104.63, stdev=47.00 00:25:23.732 lat (msec): min=3, max=289, avg=105.93, stdev=47.86 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 41], 20.00th=[ 70], 00:25:23.732 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 102], 60.00th=[ 111], 00:25:23.732 | 70.00th=[ 126], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 184], 00:25:23.732 | 99.00th=[ 201], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 232], 00:25:23.732 | 99.99th=[ 251] 00:25:23.732 bw ( KiB/s): min=88064, max=320512, per=7.93%, avg=154093.45, stdev=56932.81, samples=20 00:25:23.732 iops : min= 344, max= 1252, avg=601.90, stdev=222.38, samples=20 00:25:23.732 lat (msec) : 4=0.07%, 10=0.97%, 20=3.17%, 50=8.22%, 100=36.47% 00:25:23.732 lat (msec) : 250=51.09%, 500=0.02% 00:25:23.732 cpu : usr=0.45%, sys=2.09%, ctx=1327, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: total=6082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job4: (groupid=0, jobs=1): err= 0: pid=1838016: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=717, BW=179MiB/s (188MB/s)(1809MiB/10082msec) 00:25:23.732 slat (usec): min=14, max=136268, avg=1238.78, stdev=4041.77 00:25:23.732 clat (msec): min=4, max=275, avg=87.85, stdev=50.33 00:25:23.732 lat (msec): min=4, max=398, avg=89.09, stdev=51.07 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 40], 00:25:23.732 | 30.00th=[ 54], 40.00th=[ 64], 50.00th=[ 75], 60.00th=[ 91], 00:25:23.732 | 70.00th=[ 110], 80.00th=[ 138], 90.00th=[ 161], 95.00th=[ 180], 00:25:23.732 | 99.00th=[ 230], 99.50th=[ 257], 99.90th=[ 271], 99.95th=[ 271], 00:25:23.732 | 99.99th=[ 275] 00:25:23.732 bw ( KiB/s): min=88064, max=403456, per=9.44%, avg=183567.80, stdev=88328.48, samples=20 00:25:23.732 iops : min= 344, max= 1576, 
avg=717.05, stdev=345.05, samples=20 00:25:23.732 lat (msec) : 10=1.16%, 20=1.37%, 50=24.66%, 100=38.11%, 250=34.14% 00:25:23.732 lat (msec) : 500=0.55% 00:25:23.732 cpu : usr=0.53%, sys=2.47%, ctx=1542, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: total=7234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job5: (groupid=0, jobs=1): err= 0: pid=1838017: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=775, BW=194MiB/s (203MB/s)(1953MiB/10068msec) 00:25:23.732 slat (usec): min=9, max=52541, avg=944.97, stdev=3154.07 00:25:23.732 clat (usec): min=1468, max=191137, avg=81456.38, stdev=31732.20 00:25:23.732 lat (usec): min=1490, max=204759, avg=82401.35, stdev=31984.65 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 38], 20.00th=[ 54], 00:25:23.732 | 30.00th=[ 67], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 91], 00:25:23.732 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 118], 95.00th=[ 131], 00:25:23.732 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 184], 00:25:23.732 | 99.99th=[ 192] 00:25:23.732 bw ( KiB/s): min=132096, max=295424, per=10.20%, avg=198328.50, stdev=39555.22, samples=20 00:25:23.732 iops : min= 516, max= 1154, avg=774.70, stdev=154.51, samples=20 00:25:23.732 lat (msec) : 2=0.03%, 4=0.03%, 10=0.61%, 20=2.94%, 50=13.70% 00:25:23.732 lat (msec) : 100=56.19%, 250=26.50% 00:25:23.732 cpu : usr=0.44%, sys=2.54%, ctx=1733, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: total=7811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job6: (groupid=0, jobs=1): err= 0: pid=1838018: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=701, BW=175MiB/s (184MB/s)(1768MiB/10085msec) 00:25:23.732 slat (usec): min=10, max=86257, avg=1081.66, stdev=4014.22 00:25:23.732 clat (msec): min=3, max=262, avg=90.11, stdev=37.72 00:25:23.732 lat (msec): min=3, max=270, avg=91.20, stdev=38.27 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 44], 20.00th=[ 63], 00:25:23.732 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 96], 00:25:23.732 | 70.00th=[ 104], 80.00th=[ 114], 90.00th=[ 138], 95.00th=[ 165], 00:25:23.732 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 220], 99.95th=[ 239], 00:25:23.732 | 99.99th=[ 264] 00:25:23.732 bw ( KiB/s): min=85504, max=284672, per=9.23%, avg=179378.55, stdev=52182.96, samples=20 00:25:23.732 iops : min= 334, max= 1112, avg=700.60, stdev=203.80, samples=20 00:25:23.732 lat (msec) : 4=0.08%, 10=0.62%, 20=1.58%, 50=9.86%, 100=53.41% 00:25:23.732 lat (msec) : 250=34.43%, 500=0.01% 00:25:23.732 cpu : usr=0.45%, sys=2.40%, ctx=1578, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: 
total=7070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job7: (groupid=0, jobs=1): err= 0: pid=1838019: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=480, BW=120MiB/s (126MB/s)(1211MiB/10080msec) 00:25:23.732 slat (usec): min=15, max=109040, avg=2010.33, stdev=5544.83 00:25:23.732 clat (msec): min=4, max=319, avg=131.08, stdev=47.88 00:25:23.732 lat (msec): min=4, max=330, avg=133.09, stdev=48.74 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 58], 20.00th=[ 96], 00:25:23.732 | 30.00th=[ 109], 40.00th=[ 122], 50.00th=[ 138], 60.00th=[ 148], 00:25:23.732 | 70.00th=[ 161], 80.00th=[ 171], 90.00th=[ 184], 95.00th=[ 197], 00:25:23.732 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 305], 00:25:23.732 | 99.99th=[ 321] 00:25:23.732 bw ( KiB/s): min=72704, max=260608, per=6.29%, avg=122332.80, stdev=41510.41, samples=20 00:25:23.732 iops : min= 284, max= 1018, avg=477.85, stdev=162.16, samples=20 00:25:23.732 lat (msec) : 10=0.27%, 20=0.62%, 50=6.40%, 100=15.14%, 250=76.27% 00:25:23.732 lat (msec) : 500=1.30% 00:25:23.732 cpu : usr=0.31%, sys=1.92%, ctx=1059, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job8: (groupid=0, jobs=1): err= 0: pid=1838020: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=705, BW=176MiB/s (185MB/s)(1775MiB/10068msec) 00:25:23.732 slat (usec): min=9, max=122004, avg=715.25, stdev=4130.74 00:25:23.732 clat (msec): min=2, max=296, avg=89.93, stdev=49.58 00:25:23.732 lat (msec): min=2, max=299, avg=90.64, stdev=50.13 00:25:23.732 clat percentiles (msec): 00:25:23.732 | 1.00th=[ 11], 5.00th=[ 21], 10.00th=[ 28], 20.00th=[ 41], 00:25:23.732 | 30.00th=[ 60], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 97], 00:25:23.732 | 70.00th=[ 109], 80.00th=[ 136], 90.00th=[ 167], 95.00th=[ 180], 00:25:23.732 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 230], 99.95th=[ 239], 00:25:23.732 | 99.99th=[ 296] 00:25:23.732 bw ( KiB/s): min=80896, max=261120, per=9.27%, avg=180168.30, stdev=46895.33, samples=20 00:25:23.732 iops : min= 316, max= 1020, avg=703.75, stdev=183.18, samples=20 00:25:23.732 lat (msec) : 4=0.07%, 10=0.82%, 20=3.75%, 50=20.64%, 100=38.04% 00:25:23.732 lat (msec) : 250=36.66%, 500=0.03% 00:25:23.732 cpu : usr=0.30%, sys=2.07%, ctx=1803, majf=0, minf=4097 00:25:23.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:23.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.732 issued rwts: total=7101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.732 job9: (groupid=0, jobs=1): err= 0: pid=1838021: Wed May 15 16:46:29 2024 00:25:23.732 read: IOPS=746, BW=187MiB/s (196MB/s)(1881MiB/10082msec) 00:25:23.732 slat (usec): min=9, max=91706, avg=921.19, stdev=3759.78 00:25:23.732 clat (usec): min=1753, max=250188, avg=84762.36, stdev=46305.24 00:25:23.732 lat (usec): min=1775, max=267874, avg=85683.55, stdev=46860.84 00:25:23.732 clat percentiles (msec): 
00:25:23.732 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 39], 00:25:23.732 | 30.00th=[ 57], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 89], 00:25:23.732 | 70.00th=[ 102], 80.00th=[ 123], 90.00th=[ 159], 95.00th=[ 176], 00:25:23.732 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 243], 00:25:23.732 | 99.99th=[ 251] 00:25:23.732 bw ( KiB/s): min=85504, max=432128, per=9.82%, avg=190942.65, stdev=76802.91, samples=20 00:25:23.732 iops : min= 334, max= 1688, avg=745.85, stdev=300.01, samples=20 00:25:23.732 lat (msec) : 2=0.01%, 4=0.32%, 10=0.61%, 20=2.13%, 50=22.71% 00:25:23.733 lat (msec) : 100=43.31%, 250=30.88%, 500=0.03% 00:25:23.733 cpu : usr=0.48%, sys=2.46%, ctx=1719, majf=0, minf=4097 00:25:23.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:23.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.733 issued rwts: total=7522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.733 job10: (groupid=0, jobs=1): err= 0: pid=1838022: Wed May 15 16:46:29 2024 00:25:23.733 read: IOPS=604, BW=151MiB/s (158MB/s)(1524MiB/10081msec) 00:25:23.733 slat (usec): min=13, max=63775, avg=1552.08, stdev=4687.23 00:25:23.733 clat (usec): min=1846, max=334573, avg=104191.27, stdev=56172.03 00:25:23.733 lat (usec): min=1912, max=334631, avg=105743.36, stdev=57054.71 00:25:23.733 clat percentiles (msec): 00:25:23.733 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 46], 00:25:23.733 | 30.00th=[ 66], 40.00th=[ 83], 50.00th=[ 109], 60.00th=[ 125], 00:25:23.733 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 188], 00:25:23.733 | 99.00th=[ 247], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 309], 00:25:23.733 | 99.99th=[ 334] 00:25:23.733 bw ( KiB/s): min=66560, max=348672, per=7.94%, avg=154384.10, stdev=80272.24, samples=20 00:25:23.733 iops : min= 260, max= 1362, avg=603.05, stdev=313.57, samples=20 00:25:23.733 lat (msec) : 2=0.02%, 4=0.49%, 10=1.44%, 20=2.08%, 50=18.65% 00:25:23.733 lat (msec) : 100=23.35%, 250=53.04%, 500=0.92% 00:25:23.733 cpu : usr=0.28%, sys=2.27%, ctx=1264, majf=0, minf=4097 00:25:23.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:23.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.733 issued rwts: total=6095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.733 00:25:23.733 Run status group 0 (all jobs): 00:25:23.733 READ: bw=1899MiB/s (1991MB/s), 120MiB/s-238MiB/s (126MB/s-249MB/s), io=18.8GiB (20.2GB), run=10066-10126msec 00:25:23.733 00:25:23.733 Disk stats (read/write): 00:25:23.733 nvme0n1: ios=10390/0, merge=0/0, ticks=1233260/0, in_queue=1233260, util=97.18% 00:25:23.733 nvme10n1: ios=19042/0, merge=0/0, ticks=1233119/0, in_queue=1233119, util=97.39% 00:25:23.733 nvme1n1: ios=16202/0, merge=0/0, ticks=1231615/0, in_queue=1231615, util=97.74% 00:25:23.733 nvme2n1: ios=11975/0, merge=0/0, ticks=1236156/0, in_queue=1236156, util=97.86% 00:25:23.733 nvme3n1: ios=14264/0, merge=0/0, ticks=1233165/0, in_queue=1233165, util=97.95% 00:25:23.733 nvme4n1: ios=15393/0, merge=0/0, ticks=1238611/0, in_queue=1238611, util=98.30% 00:25:23.733 nvme5n1: ios=13926/0, merge=0/0, ticks=1235539/0, in_queue=1235539, util=98.44% 
00:25:23.733 nvme6n1: ios=9491/0, merge=0/0, ticks=1227750/0, in_queue=1227750, util=98.56% 00:25:23.733 nvme7n1: ios=14003/0, merge=0/0, ticks=1244980/0, in_queue=1244980, util=98.93% 00:25:23.733 nvme8n1: ios=14860/0, merge=0/0, ticks=1237202/0, in_queue=1237202, util=99.12% 00:25:23.733 nvme9n1: ios=12003/0, merge=0/0, ticks=1229966/0, in_queue=1229966, util=99.25% 00:25:23.733 16:46:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:23.733 [global] 00:25:23.733 thread=1 00:25:23.733 invalidate=1 00:25:23.733 rw=randwrite 00:25:23.733 time_based=1 00:25:23.733 runtime=10 00:25:23.733 ioengine=libaio 00:25:23.733 direct=1 00:25:23.733 bs=262144 00:25:23.733 iodepth=64 00:25:23.733 norandommap=1 00:25:23.733 numjobs=1 00:25:23.733 00:25:23.733 [job0] 00:25:23.733 filename=/dev/nvme0n1 00:25:23.733 [job1] 00:25:23.733 filename=/dev/nvme10n1 00:25:23.733 [job2] 00:25:23.733 filename=/dev/nvme1n1 00:25:23.733 [job3] 00:25:23.733 filename=/dev/nvme2n1 00:25:23.733 [job4] 00:25:23.733 filename=/dev/nvme3n1 00:25:23.733 [job5] 00:25:23.733 filename=/dev/nvme4n1 00:25:23.733 [job6] 00:25:23.733 filename=/dev/nvme5n1 00:25:23.733 [job7] 00:25:23.733 filename=/dev/nvme6n1 00:25:23.733 [job8] 00:25:23.733 filename=/dev/nvme7n1 00:25:23.733 [job9] 00:25:23.733 filename=/dev/nvme8n1 00:25:23.733 [job10] 00:25:23.733 filename=/dev/nvme9n1 00:25:23.733 Could not set queue depth (nvme0n1) 00:25:23.733 Could not set queue depth (nvme10n1) 00:25:23.733 Could not set queue depth (nvme1n1) 00:25:23.733 Could not set queue depth (nvme2n1) 00:25:23.733 Could not set queue depth (nvme3n1) 00:25:23.733 Could not set queue depth (nvme4n1) 00:25:23.733 Could not set queue depth (nvme5n1) 00:25:23.733 Could not set queue depth (nvme6n1) 00:25:23.733 Could not set queue depth (nvme7n1) 00:25:23.733 Could not set queue depth (nvme8n1) 00:25:23.733 Could not set queue depth (nvme9n1) 00:25:23.733 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:23.733 fio-3.35 00:25:23.733 Starting 11 threads 00:25:33.767 00:25:33.768 job0: (groupid=0, jobs=1): err= 0: 
pid=1839189: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=399, BW=99.9MiB/s (105MB/s)(1017MiB/10180msec); 0 zone resets 00:25:33.768 slat (usec): min=19, max=73647, avg=1575.20, stdev=4659.22 00:25:33.768 clat (msec): min=2, max=569, avg=158.46, stdev=86.39 00:25:33.768 lat (msec): min=2, max=569, avg=160.04, stdev=87.41 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 44], 20.00th=[ 83], 00:25:33.768 | 30.00th=[ 113], 40.00th=[ 131], 50.00th=[ 153], 60.00th=[ 176], 00:25:33.768 | 70.00th=[ 199], 80.00th=[ 226], 90.00th=[ 279], 95.00th=[ 321], 00:25:33.768 | 99.00th=[ 363], 99.50th=[ 372], 99.90th=[ 498], 99.95th=[ 542], 00:25:33.768 | 99.99th=[ 567] 00:25:33.768 bw ( KiB/s): min=45056, max=173056, per=7.52%, avg=102542.00, stdev=31306.26, samples=20 00:25:33.768 iops : min= 176, max= 676, avg=400.55, stdev=122.28, samples=20 00:25:33.768 lat (msec) : 4=0.15%, 10=1.57%, 20=2.19%, 50=7.37%, 100=13.59% 00:25:33.768 lat (msec) : 250=60.37%, 500=14.65%, 750=0.10% 00:25:33.768 cpu : usr=1.33%, sys=1.31%, ctx=2504, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,4068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job1: (groupid=0, jobs=1): err= 0: pid=1839201: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=489, BW=122MiB/s (128MB/s)(1247MiB/10177msec); 0 zone resets 00:25:33.768 slat (usec): min=23, max=128089, avg=1477.43, stdev=4678.19 00:25:33.768 clat (usec): min=1521, max=758639, avg=129062.68, stdev=107827.63 00:25:33.768 lat (usec): min=1579, max=758687, avg=130540.11, stdev=108751.75 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 43], 00:25:33.768 | 30.00th=[ 68], 40.00th=[ 85], 50.00th=[ 102], 60.00th=[ 112], 00:25:33.768 | 70.00th=[ 138], 80.00th=[ 203], 90.00th=[ 279], 95.00th=[ 334], 00:25:33.768 | 99.00th=[ 558], 99.50th=[ 684], 99.90th=[ 743], 99.95th=[ 751], 00:25:33.768 | 99.99th=[ 760] 00:25:33.768 bw ( KiB/s): min=49152, max=326656, per=9.25%, avg=126024.60, stdev=68409.82, samples=20 00:25:33.768 iops : min= 192, max= 1276, avg=492.25, stdev=267.18, samples=20 00:25:33.768 lat (msec) : 2=0.04%, 4=0.48%, 10=2.07%, 20=3.97%, 50=16.67% 00:25:33.768 lat (msec) : 100=25.87%, 250=36.22%, 500=13.32%, 750=1.30%, 1000=0.06% 00:25:33.768 cpu : usr=1.41%, sys=1.68%, ctx=2787, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,4986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job2: (groupid=0, jobs=1): err= 0: pid=1839202: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=492, BW=123MiB/s (129MB/s)(1254MiB/10179msec); 0 zone resets 00:25:33.768 slat (usec): min=19, max=73278, avg=1568.16, stdev=4700.62 00:25:33.768 clat (msec): min=2, max=751, avg=128.28, stdev=104.75 00:25:33.768 lat (msec): min=2, max=751, avg=129.84, stdev=106.14 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 39], 20.00th=[ 
70], 00:25:33.768 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 102], 60.00th=[ 112], 00:25:33.768 | 70.00th=[ 138], 80.00th=[ 192], 90.00th=[ 253], 95.00th=[ 321], 00:25:33.768 | 99.00th=[ 634], 99.50th=[ 676], 99.90th=[ 743], 99.95th=[ 751], 00:25:33.768 | 99.99th=[ 751] 00:25:33.768 bw ( KiB/s): min=34816, max=228864, per=9.30%, avg=126725.10, stdev=60980.20, samples=20 00:25:33.768 iops : min= 136, max= 894, avg=495.00, stdev=238.23, samples=20 00:25:33.768 lat (msec) : 4=0.20%, 10=2.15%, 20=3.93%, 50=6.60%, 100=36.20% 00:25:33.768 lat (msec) : 250=40.53%, 500=8.76%, 750=1.58%, 1000=0.06% 00:25:33.768 cpu : usr=1.64%, sys=1.60%, ctx=2573, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,5014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job3: (groupid=0, jobs=1): err= 0: pid=1839203: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=509, BW=127MiB/s (134MB/s)(1291MiB/10126msec); 0 zone resets 00:25:33.768 slat (usec): min=21, max=273054, avg=1578.85, stdev=5979.10 00:25:33.768 clat (msec): min=2, max=880, avg=123.89, stdev=111.46 00:25:33.768 lat (msec): min=2, max=880, avg=125.47, stdev=112.76 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 53], 00:25:33.768 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 104], 00:25:33.768 | 70.00th=[ 129], 80.00th=[ 171], 90.00th=[ 222], 95.00th=[ 330], 00:25:33.768 | 99.00th=[ 751], 99.50th=[ 860], 99.90th=[ 877], 99.95th=[ 877], 00:25:33.768 | 99.99th=[ 877] 00:25:33.768 bw ( KiB/s): min=16384, max=290816, per=9.58%, avg=130539.40, stdev=73800.36, samples=20 00:25:33.768 iops : min= 64, max= 1136, avg=509.90, stdev=288.31, samples=20 00:25:33.768 lat (msec) : 4=0.29%, 10=1.45%, 20=0.54%, 50=12.71%, 100=43.32% 00:25:33.768 lat (msec) : 250=34.54%, 500=5.87%, 750=0.23%, 1000=1.05% 00:25:33.768 cpu : usr=1.57%, sys=1.79%, ctx=2011, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,5162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job4: (groupid=0, jobs=1): err= 0: pid=1839204: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=438, BW=110MiB/s (115MB/s)(1111MiB/10127msec); 0 zone resets 00:25:33.768 slat (usec): min=19, max=216411, avg=1817.45, stdev=5648.74 00:25:33.768 clat (msec): min=3, max=681, avg=144.01, stdev=102.31 00:25:33.768 lat (msec): min=3, max=681, avg=145.83, stdev=103.73 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 34], 00:25:33.768 | 30.00th=[ 56], 40.00th=[ 96], 50.00th=[ 142], 60.00th=[ 176], 00:25:33.768 | 70.00th=[ 203], 80.00th=[ 230], 90.00th=[ 284], 95.00th=[ 300], 00:25:33.768 | 99.00th=[ 439], 99.50th=[ 464], 99.90th=[ 634], 99.95th=[ 659], 00:25:33.768 | 99.99th=[ 684] 00:25:33.768 bw ( KiB/s): min=55296, max=379904, per=8.23%, avg=112117.05, stdev=79865.09, samples=20 00:25:33.768 iops : min= 216, max= 1484, avg=437.95, stdev=311.97, samples=20 
00:25:33.768 lat (msec) : 4=0.02%, 10=0.83%, 20=1.06%, 50=26.20%, 100=12.74% 00:25:33.768 lat (msec) : 250=42.39%, 500=16.46%, 750=0.29% 00:25:33.768 cpu : usr=1.52%, sys=1.61%, ctx=2555, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,4442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job5: (groupid=0, jobs=1): err= 0: pid=1839205: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=779, BW=195MiB/s (204MB/s)(1958MiB/10041msec); 0 zone resets 00:25:33.768 slat (usec): min=24, max=47106, avg=1059.57, stdev=2385.75 00:25:33.768 clat (msec): min=4, max=684, avg=80.85, stdev=51.10 00:25:33.768 lat (msec): min=5, max=686, avg=81.91, stdev=51.45 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:25:33.768 | 30.00th=[ 45], 40.00th=[ 61], 50.00th=[ 73], 60.00th=[ 77], 00:25:33.768 | 70.00th=[ 99], 80.00th=[ 114], 90.00th=[ 132], 95.00th=[ 153], 00:25:33.768 | 99.00th=[ 228], 99.50th=[ 300], 99.90th=[ 642], 99.95th=[ 667], 00:25:33.768 | 99.99th=[ 684] 00:25:33.768 bw ( KiB/s): min=77824, max=401408, per=14.59%, avg=198874.10, stdev=88186.09, samples=20 00:25:33.768 iops : min= 304, max= 1568, avg=776.85, stdev=344.48, samples=20 00:25:33.768 lat (msec) : 10=0.05%, 20=0.49%, 50=33.94%, 100=36.00%, 250=28.86% 00:25:33.768 lat (msec) : 500=0.42%, 750=0.24% 00:25:33.768 cpu : usr=2.44%, sys=2.94%, ctx=2790, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,7831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job6: (groupid=0, jobs=1): err= 0: pid=1839206: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=346, BW=86.6MiB/s (90.9MB/s)(878MiB/10127msec); 0 zone resets 00:25:33.768 slat (usec): min=27, max=43147, avg=2789.49, stdev=5355.63 00:25:33.768 clat (msec): min=2, max=332, avg=181.74, stdev=62.80 00:25:33.768 lat (msec): min=3, max=332, avg=184.53, stdev=63.60 00:25:33.768 clat percentiles (msec): 00:25:33.768 | 1.00th=[ 40], 5.00th=[ 94], 10.00th=[ 101], 20.00th=[ 120], 00:25:33.768 | 30.00th=[ 148], 40.00th=[ 167], 50.00th=[ 184], 60.00th=[ 197], 00:25:33.768 | 70.00th=[ 209], 80.00th=[ 232], 90.00th=[ 275], 95.00th=[ 296], 00:25:33.768 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 334], 00:25:33.768 | 99.99th=[ 334] 00:25:33.768 bw ( KiB/s): min=55296, max=149504, per=6.48%, avg=88256.30, stdev=28153.00, samples=20 00:25:33.768 iops : min= 216, max= 584, avg=344.75, stdev=109.97, samples=20 00:25:33.768 lat (msec) : 4=0.06%, 10=0.11%, 20=0.20%, 50=0.88%, 100=8.92% 00:25:33.768 lat (msec) : 250=74.16%, 500=15.67% 00:25:33.768 cpu : usr=1.13%, sys=1.15%, ctx=1014, majf=0, minf=1 00:25:33.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:33.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.768 issued rwts: total=0,3510,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:33.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.768 job7: (groupid=0, jobs=1): err= 0: pid=1839207: Wed May 15 16:46:40 2024 00:25:33.768 write: IOPS=450, BW=113MiB/s (118MB/s)(1147MiB/10178msec); 0 zone resets 00:25:33.769 slat (usec): min=17, max=226119, avg=1659.86, stdev=6309.92 00:25:33.769 clat (msec): min=2, max=852, avg=140.10, stdev=110.85 00:25:33.769 lat (msec): min=2, max=852, avg=141.76, stdev=112.26 00:25:33.769 clat percentiles (msec): 00:25:33.769 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 53], 20.00th=[ 82], 00:25:33.769 | 30.00th=[ 94], 40.00th=[ 100], 50.00th=[ 110], 60.00th=[ 125], 00:25:33.769 | 70.00th=[ 153], 80.00th=[ 190], 90.00th=[ 224], 95.00th=[ 330], 00:25:33.769 | 99.00th=[ 793], 99.50th=[ 835], 99.90th=[ 852], 99.95th=[ 852], 00:25:33.769 | 99.99th=[ 852] 00:25:33.769 bw ( KiB/s): min=12288, max=195072, per=8.50%, avg=115865.60, stdev=56968.42, samples=20 00:25:33.769 iops : min= 48, max= 762, avg=452.60, stdev=222.53, samples=20 00:25:33.769 lat (msec) : 4=0.33%, 10=2.16%, 20=2.09%, 50=5.06%, 100=30.70% 00:25:33.769 lat (msec) : 250=51.54%, 500=6.67%, 750=0.28%, 1000=1.18% 00:25:33.769 cpu : usr=1.61%, sys=1.47%, ctx=2369, majf=0, minf=1 00:25:33.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:33.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.769 issued rwts: total=0,4589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.769 job8: (groupid=0, jobs=1): err= 0: pid=1839208: Wed May 15 16:46:40 2024 00:25:33.769 write: IOPS=450, BW=113MiB/s (118MB/s)(1147MiB/10172msec); 0 zone resets 00:25:33.769 slat (usec): min=18, max=173263, avg=1549.33, stdev=5450.38 00:25:33.769 clat (msec): min=2, max=608, avg=140.22, stdev=101.44 00:25:33.769 lat (msec): min=2, max=608, avg=141.77, stdev=102.38 00:25:33.769 clat percentiles (msec): 00:25:33.769 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 37], 00:25:33.769 | 30.00th=[ 57], 40.00th=[ 95], 50.00th=[ 131], 60.00th=[ 174], 00:25:33.769 | 70.00th=[ 199], 80.00th=[ 224], 90.00th=[ 275], 95.00th=[ 317], 00:25:33.769 | 99.00th=[ 397], 99.50th=[ 510], 99.90th=[ 592], 99.95th=[ 600], 00:25:33.769 | 99.99th=[ 609] 00:25:33.769 bw ( KiB/s): min=51200, max=302080, per=8.50%, avg=115814.40, stdev=66423.80, samples=20 00:25:33.769 iops : min= 200, max= 1180, avg=452.40, stdev=259.47, samples=20 00:25:33.769 lat (msec) : 4=1.72%, 10=2.16%, 20=4.51%, 50=16.79%, 100=16.66% 00:25:33.769 lat (msec) : 250=44.23%, 500=13.39%, 750=0.55% 00:25:33.769 cpu : usr=1.47%, sys=1.53%, ctx=2551, majf=0, minf=1 00:25:33.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:33.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.769 issued rwts: total=0,4587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.769 job9: (groupid=0, jobs=1): err= 0: pid=1839209: Wed May 15 16:46:40 2024 00:25:33.769 write: IOPS=462, BW=116MiB/s (121MB/s)(1166MiB/10075msec); 0 zone resets 00:25:33.769 slat (usec): min=20, max=96180, avg=1900.97, stdev=4088.71 00:25:33.769 clat (msec): min=12, max=681, avg=136.28, stdev=64.26 00:25:33.769 lat (msec): min=12, max=681, avg=138.18, 
stdev=64.81 00:25:33.769 clat percentiles (msec): 00:25:33.769 | 1.00th=[ 45], 5.00th=[ 57], 10.00th=[ 84], 20.00th=[ 90], 00:25:33.769 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 120], 60.00th=[ 138], 00:25:33.769 | 70.00th=[ 167], 80.00th=[ 188], 90.00th=[ 209], 95.00th=[ 226], 00:25:33.769 | 99.00th=[ 372], 99.50th=[ 388], 99.90th=[ 617], 99.95th=[ 642], 00:25:33.769 | 99.99th=[ 684] 00:25:33.769 bw ( KiB/s): min=41555, max=190464, per=8.64%, avg=117789.75, stdev=43378.96, samples=20 00:25:33.769 iops : min= 162, max= 744, avg=460.10, stdev=169.48, samples=20 00:25:33.769 lat (msec) : 20=0.13%, 50=4.01%, 100=36.49%, 250=56.26%, 500=2.94% 00:25:33.769 lat (msec) : 750=0.17% 00:25:33.769 cpu : usr=1.61%, sys=1.57%, ctx=1517, majf=0, minf=1 00:25:33.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:33.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.769 issued rwts: total=0,4664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.769 job10: (groupid=0, jobs=1): err= 0: pid=1839210: Wed May 15 16:46:40 2024 00:25:33.769 write: IOPS=528, BW=132MiB/s (138MB/s)(1334MiB/10104msec); 0 zone resets 00:25:33.769 slat (usec): min=24, max=345280, avg=1309.05, stdev=5842.57 00:25:33.769 clat (msec): min=3, max=511, avg=119.66, stdev=71.60 00:25:33.769 lat (msec): min=4, max=511, avg=120.97, stdev=72.38 00:25:33.769 clat percentiles (msec): 00:25:33.769 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 50], 20.00th=[ 73], 00:25:33.769 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 112], 00:25:33.769 | 70.00th=[ 138], 80.00th=[ 180], 90.00th=[ 213], 95.00th=[ 232], 00:25:33.769 | 99.00th=[ 380], 99.50th=[ 451], 99.90th=[ 506], 99.95th=[ 510], 00:25:33.769 | 99.99th=[ 510] 00:25:33.769 bw ( KiB/s): min=67072, max=213504, per=9.91%, avg=135004.80, stdev=50267.12, samples=20 00:25:33.769 iops : min= 262, max= 834, avg=527.35, stdev=196.35, samples=20 00:25:33.769 lat (msec) : 4=0.02%, 10=0.43%, 20=1.54%, 50=8.32%, 100=43.48% 00:25:33.769 lat (msec) : 250=43.22%, 500=2.75%, 750=0.24% 00:25:33.769 cpu : usr=1.73%, sys=1.92%, ctx=2960, majf=0, minf=1 00:25:33.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.769 issued rwts: total=0,5336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.769 00:25:33.769 Run status group 0 (all jobs): 00:25:33.769 WRITE: bw=1331MiB/s (1395MB/s), 86.6MiB/s-195MiB/s (90.9MB/s-204MB/s), io=13.2GiB (14.2GB), run=10041-10180msec 00:25:33.769 00:25:33.769 Disk stats (read/write): 00:25:33.769 nvme0n1: ios=49/8113, merge=0/0, ticks=89/1248601, in_queue=1248690, util=97.62% 00:25:33.769 nvme10n1: ios=45/9958, merge=0/0, ticks=58/1244811, in_queue=1244869, util=97.58% 00:25:33.769 nvme1n1: ios=13/10010, merge=0/0, ticks=349/1241996, in_queue=1242345, util=97.84% 00:25:33.769 nvme2n1: ios=15/10123, merge=0/0, ticks=97/1211471, in_queue=1211568, util=97.75% 00:25:33.769 nvme3n1: ios=0/8683, merge=0/0, ticks=0/1212261, in_queue=1212261, util=97.71% 00:25:33.769 nvme4n1: ios=43/15271, merge=0/0, ticks=1265/1214970, in_queue=1216235, util=99.91% 00:25:33.769 nvme5n1: ios=0/6821, merge=0/0, ticks=0/1203742, 
in_queue=1203742, util=98.24% 00:25:33.769 nvme6n1: ios=48/9163, merge=0/0, ticks=991/1242446, in_queue=1243437, util=99.94% 00:25:33.769 nvme7n1: ios=47/9166, merge=0/0, ticks=1365/1215086, in_queue=1216451, util=99.97% 00:25:33.769 nvme8n1: ios=0/9085, merge=0/0, ticks=0/1205073, in_queue=1205073, util=98.93% 00:25:33.769 nvme9n1: ios=43/10424, merge=0/0, ticks=3829/1179449, in_queue=1183278, util=99.94% 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:33.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:33.769 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:33.769 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:33.769 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.770 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.770 16:46:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.770 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.770 16:46:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:34.027 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.027 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:34.284 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.284 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:34.541 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.541 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:34.798 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:34.798 
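The teardown pattern repeating through these lines runs once per subsystem: disconnect the host side, poll lsblk until the SPDK serial disappears, then delete the subsystem on the target. A minimal standalone sketch of the same loop (not the harness's exact helpers; assumes nvme-cli and SPDK's scripts/rpc.py are available, and the bound of 11 matches the "seq 1 11" over $NVMF_SUBSYS seen above):
  for i in $(seq 1 11); do
      # host side: tear down the NVMe/TCP connection to this subsystem
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # wait until the namespace (serial SPDK$i) is no longer visible
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # target side: remove the subsystem via JSON-RPC
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done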
16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.798 16:46:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:35.056 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:35.056 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:35.056 
16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.056 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.057 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.057 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.057 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:35.314 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:35.314 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.314 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:35.315 16:46:42 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.315 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.315 rmmod nvme_tcp 00:25:35.315 rmmod nvme_fabrics 00:25:35.572 rmmod nvme_keyring 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1833766 ']' 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1833766 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1833766 ']' 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1833766 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1833766 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1833766' 00:25:35.572 killing process with pid 1833766 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1833766 00:25:35.572 [2024-05-15 16:46:42.602596] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:35.572 16:46:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 1833766 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.138 16:46:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:38.040 16:46:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.040 00:25:38.040 real 1m0.552s 00:25:38.040 user 3m15.943s 00:25:38.040 sys 0m26.405s 00:25:38.040 16:46:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:38.040 16:46:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.040 ************************************ 00:25:38.040 END TEST nvmf_multiconnection 00:25:38.040 ************************************ 00:25:38.040 16:46:45 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:38.040 16:46:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:38.040 16:46:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:38.040 16:46:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.040 ************************************ 00:25:38.040 START TEST nvmf_initiator_timeout 00:25:38.040 ************************************ 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:38.041 * Looking for test storage... 00:25:38.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.041 16:46:45 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.041 16:46:45 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.041 16:46:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.569 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:40.570 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:40.570 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.570 16:46:47 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:40.570 Found net devices under 0000:09:00.0: cvl_0_0 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:40.570 Found net devices under 0000:09:00.1: cvl_0_1 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
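The next lines build the isolated point-to-point TCP topology: the target port (cvl_0_0) is moved into its own network namespace while the initiator stays in the root namespace on cvl_0_1, and a ping in each direction verifies the link. Condensed into a standalone sketch (interface names and addresses taken from this run):
  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                    # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1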
00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:25:40.570 00:25:40.570 --- 10.0.0.2 ping statistics --- 00:25:40.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.570 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:25:40.570 00:25:40.570 --- 10.0.0.1 ping statistics --- 00:25:40.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.570 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.570 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1842815 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1842815 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1842815 ']' 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:40.828 16:46:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.828 [2024-05-15 16:46:47.859279] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:25:40.828 [2024-05-15 16:46:47.859376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.828 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.828 [2024-05-15 16:46:47.938547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.828 [2024-05-15 16:46:48.026057] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
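At this point the target application has been launched inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 1842815) and the harness waits on its RPC socket via waitforlisten. Done by hand, a rough equivalent would be (paths relative to an SPDK checkout; the readiness probe shown is an assumption, not what the harness runs):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app answers JSON-RPC on the default /var/tmp/spdk.sock
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done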
00:25:40.828 [2024-05-15 16:46:48.026117] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.829 [2024-05-15 16:46:48.026143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.829 [2024-05-15 16:46:48.026158] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.829 [2024-05-15 16:46:48.026170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.829 [2024-05-15 16:46:48.026250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.829 [2024-05-15 16:46:48.026293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.829 [2024-05-15 16:46:48.026366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.829 [2024-05-15 16:46:48.026369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 Malloc0 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 Delay0 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 [2024-05-15 16:46:48.217449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 [2024-05-15 16:46:48.245458] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:41.086 [2024-05-15 16:46:48.245780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.086 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:41.651 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:41.651 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:41.651 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.651 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:41.651 16:46:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1843120 00:25:44.177 16:46:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:44.177 16:46:50 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:44.177 [global] 00:25:44.177 thread=1 00:25:44.177 invalidate=1 00:25:44.177 rw=write 00:25:44.177 time_based=1 00:25:44.177 runtime=60 00:25:44.177 ioengine=libaio 00:25:44.177 direct=1 00:25:44.177 bs=4096 00:25:44.177 iodepth=1 00:25:44.177 norandommap=0 00:25:44.177 numjobs=1 00:25:44.177 00:25:44.177 verify_dump=1 00:25:44.177 verify_backlog=512 00:25:44.177 verify_state_save=0 00:25:44.177 do_verify=1 00:25:44.177 verify=crc32c-intel 00:25:44.177 [job0] 00:25:44.177 filename=/dev/nvme0n1 00:25:44.177 Could not set queue depth (nvme0n1) 00:25:44.177 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:44.177 fio-3.35 00:25:44.177 Starting 1 thread 00:25:46.703 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.704 true 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.704 true 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.704 true 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.704 true 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.704 16:46:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.981 true 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 
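The crux of the test is visible in these bdev_delay_update_latency calls: while fio writes 4 KiB blocks at queue depth 1 for 60 s, the delay bdev's simulated latencies are raised far past the initiator's I/O timeout, then (in the calls that continue below) dropped back to 30 us so the stalled I/O can drain. The target-side recipe, condensed from the RPCs in this run (bdev_delay latencies are in microseconds, so 31000000 us = 31 s, beyond the Linux NVMe default 30 s I/O timeout):
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # stall I/O: 31 s average (310 s p99 write) while fio is mid-run
  for lat in avg_read avg_write p99_read; do
      scripts/rpc.py bdev_delay_update_latency Delay0 $lat 31000000
  done
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000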
00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.981 true 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.981 true 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.981 true 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:49.981 16:46:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1843120 00:26:46.236 00:26:46.236 job0: (groupid=0, jobs=1): err= 0: pid=1843312: Wed May 15 16:47:51 2024 00:26:46.236 read: IOPS=15, BW=60.4KiB/s (61.9kB/s)(3628KiB/60041msec) 00:26:46.236 slat (usec): min=5, max=11549, avg=38.27, stdev=480.27 00:26:46.236 clat (usec): min=306, max=40910k, avg=65844.50, stdev=1357859.71 00:26:46.236 lat (usec): min=312, max=40910k, avg=65882.77, stdev=1357858.82 00:26:46.236 clat percentiles (usec): 00:26:46.236 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 00:26:46.236 | 20.00th=[ 396], 30.00th=[ 486], 40.00th=[ 506], 00:26:46.236 | 50.00th=[ 635], 60.00th=[ 41157], 70.00th=[ 41157], 00:26:46.236 | 80.00th=[ 41157], 90.00th=[ 41681], 95.00th=[ 42206], 00:26:46.236 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:26:46.236 | 99.95th=[17112761], 99.99th=[17112761] 00:26:46.236 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60041msec); 0 zone resets 00:26:46.236 slat (usec): min=5, max=30727, avg=45.97, stdev=959.77 00:26:46.236 clat (usec): min=193, max=430, avg=224.10, stdev=22.12 00:26:46.236 lat (usec): min=204, max=31027, avg=270.06, stdev=962.43 00:26:46.236 clat percentiles (usec): 00:26:46.237 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:26:46.237 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:26:46.237 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 273], 00:26:46.237 | 99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 408], 99.95th=[ 433], 00:26:46.237 | 99.99th=[ 433] 00:26:46.237 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:26:46.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:26:46.237 lat (usec) : 250=48.63%, 500=21.39%, 750=6.53% 00:26:46.237 lat (msec) : 50=23.41%, >=2000=0.05% 00:26:46.237 cpu : usr=0.03%, sys=0.06%, ctx=1937, majf=0, minf=2 00:26:46.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.237 issued rwts: total=907,1024,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:46.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:46.237 00:26:46.237 Run status group 0 (all jobs): 00:26:46.237 READ: bw=60.4KiB/s (61.9kB/s), 60.4KiB/s-60.4KiB/s (61.9kB/s-61.9kB/s), io=3628KiB (3715kB), run=60041-60041msec 00:26:46.237 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60041-60041msec 00:26:46.237 00:26:46.237 Disk stats (read/write): 00:26:46.237 nvme0n1: ios=955/1024, merge=0/0, ticks=20001/217, in_queue=20218, util=99.71% 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:46.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:46.237 nvmf hotplug test: fio successful as expected 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.237 rmmod nvme_tcp 00:26:46.237 rmmod nvme_fabrics 00:26:46.237 rmmod nvme_keyring 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- 
# set -e 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1842815 ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1842815 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1842815 ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1842815 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1842815 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1842815' 00:26:46.237 killing process with pid 1842815 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1842815 00:26:46.237 [2024-05-15 16:47:51.366841] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1842815 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.237 16:47:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.494 16:47:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:46.494 00:26:46.494 real 1m8.470s 00:26:46.494 user 4m10.642s 00:26:46.494 sys 0m6.590s 00:26:46.494 16:47:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:46.494 16:47:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.494 ************************************ 00:26:46.494 END TEST nvmf_initiator_timeout 00:26:46.494 ************************************ 00:26:46.494 16:47:53 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:46.494 16:47:53 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:46.494 16:47:53 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:46.494 16:47:53 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.494 16:47:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.026 16:47:56 nvmf_tcp -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:49.026 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:49.026 Found 0000:09:00.1 (0x8086 - 0x159b) 
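The scan above (and continuing below) is nvmf/common.sh's gather_supported_nvmf_pci_devs: it builds ID lists for Intel E810/X722 and Mellanox parts, matches each discovered function (here the two E810 ports, 0x8086:0x159b), and keeps only NICs whose kernel net device under /sys/bus/pci/devices/<bdf>/net is up. A loose standalone equivalent, assuming lspci is available; this is a sketch of the idea, not the helper itself:

  # Find Intel E810 functions (vendor 0x8086, device 0x159b) and list the
  # net devices bound to each, keeping only interfaces that are up.
  for bdf in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      echo "Found $bdf (0x8086 - 0x159b)"
      for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
          [ -e "$netdir" ] || continue
          dev=${netdir##*/}
          # Only interfaces that are administratively up are used for the TCP tests.
          if [ "$(cat "$netdir/operstate")" = up ]; then
              echo "Found net device under $bdf: $dev"
          fi
      done
  done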
00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.026 16:47:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:49.027 Found net devices under 0000:09:00.0: cvl_0_0 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:49.027 Found net devices under 0000:09:00.1: cvl_0_1 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:49.027 16:47:56 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:49.027 16:47:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:49.027 16:47:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:49.027 16:47:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.027 ************************************ 00:26:49.027 START TEST nvmf_perf_adq 00:26:49.027 ************************************ 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:49.027 * Looking for test storage... 
00:26:49.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.027 16:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:51.556 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:51.556 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.556 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:51.557 Found net devices under 0000:09:00.0: cvl_0_0 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:51.557 Found net devices under 0000:09:00.1: cvl_0_1 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:51.557 16:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:52.168 16:47:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:53.541 16:48:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:58.814 16:48:05 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:58.814 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.814 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.814 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.814 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.814 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.814 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:58.815 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:58.815 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:58.815 Found net devices under 0000:09:00.0: cvl_0_0 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:58.815 Found net devices under 0000:09:00.1: cvl_0_1 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.815 16:48:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:26:58.815 00:26:58.815 --- 10.0.0.2 ping statistics --- 00:26:58.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.815 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:26:58.815 00:26:58.815 --- 10.0.0.1 ping statistics --- 00:26:58.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.815 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.815 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1855510 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1855510 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1855510 ']' 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:58.816 16:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.816 [2024-05-15 16:48:05.910802] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:26:58.816 [2024-05-15 16:48:05.910884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.816 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.816 [2024-05-15 16:48:05.991295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.074 [2024-05-15 16:48:06.078804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.074 [2024-05-15 16:48:06.078865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.074 [2024-05-15 16:48:06.078882] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.074 [2024-05-15 16:48:06.078895] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.074 [2024-05-15 16:48:06.078907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.074 [2024-05-15 16:48:06.078991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.074 [2024-05-15 16:48:06.079037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.074 [2024-05-15 16:48:06.079133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.074 [2024-05-15 16:48:06.079136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.074 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 [2024-05-15 16:48:06.302165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 Malloc1 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 [2024-05-15 16:48:06.354578] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:59.332 [2024-05-15 16:48:06.354908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1855543 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:59.332 16:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:59.332 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
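The entries above bring the ADQ target up end to end: posix sock options with placement ID and zero-copy send enabled, framework init, a TCP transport with 8 KiB I/O units and socket priority 0, a 64 MiB malloc bdev, and subsystem cnode1 listening on 10.0.0.2:4420, after which spdk_nvme_perf is launched against it on four cores (-c 0xF0). Condensed into the equivalent rpc.py sequence, assuming a target started with --wait-for-rpc (the rpc.py path is illustrative):

  rpc=scripts/rpc.py
  $rpc sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_stats output that follows is the pass/fail check: with ADQ steering, each of the four poll groups must own exactly one active I/O qpair, which the test counts as

  $rpc nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l    # expect 4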
00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:01.245 "tick_rate": 2700000000, 00:27:01.245 "poll_groups": [ 00:27:01.245 { 00:27:01.245 "name": "nvmf_tgt_poll_group_000", 00:27:01.245 "admin_qpairs": 1, 00:27:01.245 "io_qpairs": 1, 00:27:01.245 "current_admin_qpairs": 1, 00:27:01.245 "current_io_qpairs": 1, 00:27:01.245 "pending_bdev_io": 0, 00:27:01.245 "completed_nvme_io": 19573, 00:27:01.245 "transports": [ 00:27:01.245 { 00:27:01.245 "trtype": "TCP" 00:27:01.245 } 00:27:01.245 ] 00:27:01.245 }, 00:27:01.245 { 00:27:01.245 "name": "nvmf_tgt_poll_group_001", 00:27:01.245 "admin_qpairs": 0, 00:27:01.245 "io_qpairs": 1, 00:27:01.245 "current_admin_qpairs": 0, 00:27:01.245 "current_io_qpairs": 1, 00:27:01.245 "pending_bdev_io": 0, 00:27:01.245 "completed_nvme_io": 17097, 00:27:01.245 "transports": [ 00:27:01.245 { 00:27:01.245 "trtype": "TCP" 00:27:01.245 } 00:27:01.245 ] 00:27:01.245 }, 00:27:01.245 { 00:27:01.245 "name": "nvmf_tgt_poll_group_002", 00:27:01.245 "admin_qpairs": 0, 00:27:01.245 "io_qpairs": 1, 00:27:01.245 "current_admin_qpairs": 0, 00:27:01.245 "current_io_qpairs": 1, 00:27:01.245 "pending_bdev_io": 0, 00:27:01.245 "completed_nvme_io": 20306, 00:27:01.245 "transports": [ 00:27:01.245 { 00:27:01.245 "trtype": "TCP" 00:27:01.245 } 00:27:01.245 ] 00:27:01.245 }, 00:27:01.245 { 00:27:01.245 "name": "nvmf_tgt_poll_group_003", 00:27:01.245 "admin_qpairs": 0, 00:27:01.245 "io_qpairs": 1, 00:27:01.245 "current_admin_qpairs": 0, 00:27:01.245 "current_io_qpairs": 1, 00:27:01.245 "pending_bdev_io": 0, 00:27:01.245 "completed_nvme_io": 20498, 00:27:01.245 "transports": [ 00:27:01.245 { 00:27:01.245 "trtype": "TCP" 00:27:01.245 } 00:27:01.245 ] 00:27:01.245 } 00:27:01.245 ] 00:27:01.245 }' 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:01.245 16:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1855543 00:27:09.348 Initializing NVMe Controllers 00:27:09.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:09.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:09.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:09.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:09.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:09.349 Initialization complete. Launching workers. 
00:27:09.349 ======================================================== 00:27:09.349 Latency(us) 00:27:09.349 Device Information : IOPS MiB/s Average min max 00:27:09.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10805.10 42.21 5923.93 2622.52 7682.49 00:27:09.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8984.40 35.10 7123.71 2821.12 12106.01 00:27:09.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10639.50 41.56 6015.06 2027.58 8658.39 00:27:09.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10288.00 40.19 6222.96 2577.39 9420.20 00:27:09.349 ======================================================== 00:27:09.349 Total : 40717.00 159.05 6288.04 2027.58 12106.01 00:27:09.349 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.349 rmmod nvme_tcp 00:27:09.349 rmmod nvme_fabrics 00:27:09.349 rmmod nvme_keyring 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1855510 ']' 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1855510 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1855510 ']' 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1855510 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1855510 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1855510' 00:27:09.349 killing process with pid 1855510 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1855510 00:27:09.349 [2024-05-15 16:48:16.557931] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:09.349 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1855510 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:09.606 16:48:16 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.606 16:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.136 16:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.136 16:48:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:12.136 16:48:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:12.394 16:48:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:13.766 16:48:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.032 
16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:19.032 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.032 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:19.033 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:19.033 Found net devices under 0000:09:00.0: cvl_0_0 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:19.033 Found net devices under 0000:09:00.1: cvl_0_1 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.033 16:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:19.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:27:19.033 00:27:19.033 --- 10.0.0.2 ping statistics --- 00:27:19.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.033 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:27:19.033 00:27:19.033 --- 10.0.0.1 ping statistics --- 00:27:19.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.033 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:19.033 net.core.busy_poll = 1 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:19.033 net.core.busy_read = 1 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1858653 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1858653 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1858653 ']' 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.033 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.291 [2024-05-15 16:48:26.268285] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:27:19.291 [2024-05-15 16:48:26.268363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.291 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.291 [2024-05-15 16:48:26.340899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:19.291 [2024-05-15 16:48:26.422597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.291 [2024-05-15 16:48:26.422649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.291 [2024-05-15 16:48:26.422663] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.291 [2024-05-15 16:48:26.422673] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.291 [2024-05-15 16:48:26.422683] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
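The adq_configure_driver step traced above (perf_adq.sh@22-@38) boils down to the sequence below, collected from the trace; the cvl_0_0 device, the cvl_0_0_ns_spdk namespace, and the 10.0.0.2:4420 listener are the values from this particular run, and tc is invoked as /usr/sbin/tc in the trace. TC0 keeps default traffic on queues 0-1, the flower filter pins NVMe/TCP port 4420 traffic onto TC1 (queues 2-3) in hardware, and busy_poll/busy_read make the socket layer poll those queues rather than sleep (the trace additionally runs scripts/perf/nvmf/set_xps_rxqs to align XPS with the receive queues):

# Enable hardware traffic-class offload on the E810 port in the target namespace.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Busy-poll the receive queues instead of relying on interrupts.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: 2 queues at offset 0 (TC0), 2 queues at offset 2 (TC1).
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Steer the NVMe/TCP listener (10.0.0.2:4420) into TC1 purely in hardware (skip_sw).
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1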
00:27:19.292 [2024-05-15 16:48:26.422731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.292 [2024-05-15 16:48:26.422791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.292 [2024-05-15 16:48:26.422855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:19.292 [2024-05-15 16:48:26.422857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.292 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 [2024-05-15 16:48:26.658076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 Malloc1 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.549 [2024-05-15 16:48:26.710947] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:19.549 [2024-05-15 16:48:26.711286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1858690 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:19.549 16:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:19.549 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:22.076 "tick_rate": 2700000000, 00:27:22.076 "poll_groups": [ 00:27:22.076 { 00:27:22.076 "name": "nvmf_tgt_poll_group_000", 00:27:22.076 "admin_qpairs": 1, 00:27:22.076 "io_qpairs": 1, 00:27:22.076 "current_admin_qpairs": 1, 00:27:22.076 "current_io_qpairs": 1, 00:27:22.076 "pending_bdev_io": 0, 00:27:22.076 "completed_nvme_io": 24618, 00:27:22.076 "transports": [ 00:27:22.076 { 00:27:22.076 "trtype": "TCP" 00:27:22.076 } 00:27:22.076 ] 00:27:22.076 }, 00:27:22.076 { 00:27:22.076 "name": "nvmf_tgt_poll_group_001", 00:27:22.076 "admin_qpairs": 0, 00:27:22.076 "io_qpairs": 3, 00:27:22.076 "current_admin_qpairs": 0, 00:27:22.076 "current_io_qpairs": 3, 00:27:22.076 "pending_bdev_io": 0, 00:27:22.076 "completed_nvme_io": 25387, 00:27:22.076 "transports": [ 00:27:22.076 { 00:27:22.076 "trtype": "TCP" 00:27:22.076 } 00:27:22.076 ] 00:27:22.076 }, 00:27:22.076 { 00:27:22.076 "name": 
"nvmf_tgt_poll_group_002", 00:27:22.076 "admin_qpairs": 0, 00:27:22.076 "io_qpairs": 0, 00:27:22.076 "current_admin_qpairs": 0, 00:27:22.076 "current_io_qpairs": 0, 00:27:22.076 "pending_bdev_io": 0, 00:27:22.076 "completed_nvme_io": 0, 00:27:22.076 "transports": [ 00:27:22.076 { 00:27:22.076 "trtype": "TCP" 00:27:22.076 } 00:27:22.076 ] 00:27:22.076 }, 00:27:22.076 { 00:27:22.076 "name": "nvmf_tgt_poll_group_003", 00:27:22.076 "admin_qpairs": 0, 00:27:22.076 "io_qpairs": 0, 00:27:22.076 "current_admin_qpairs": 0, 00:27:22.076 "current_io_qpairs": 0, 00:27:22.076 "pending_bdev_io": 0, 00:27:22.076 "completed_nvme_io": 0, 00:27:22.076 "transports": [ 00:27:22.076 { 00:27:22.076 "trtype": "TCP" 00:27:22.076 } 00:27:22.076 ] 00:27:22.076 } 00:27:22.076 ] 00:27:22.076 }' 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:22.076 16:48:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1858690 00:27:30.193 Initializing NVMe Controllers 00:27:30.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:30.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:30.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:30.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:30.193 Initialization complete. Launching workers. 
00:27:30.193 ======================================================== 00:27:30.193 Latency(us) 00:27:30.193 Device Information : IOPS MiB/s Average min max 00:27:30.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4219.70 16.48 15172.22 2984.52 62070.60 00:27:30.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12892.70 50.36 4964.04 1348.56 8412.28 00:27:30.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4501.10 17.58 14218.44 2029.65 63762.49 00:27:30.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4689.60 18.32 13651.54 1939.42 61485.02 00:27:30.193 ======================================================== 00:27:30.193 Total : 26303.09 102.75 9734.25 1348.56 63762.49 00:27:30.193 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.193 rmmod nvme_tcp 00:27:30.193 rmmod nvme_fabrics 00:27:30.193 rmmod nvme_keyring 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1858653 ']' 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1858653 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1858653 ']' 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1858653 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1858653 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1858653' 00:27:30.193 killing process with pid 1858653 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1858653 00:27:30.193 [2024-05-15 16:48:36.934720] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:30.193 16:48:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1858653 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:30.193 16:48:37 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.193 16:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.140 16:48:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.140 16:48:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:32.140 00:27:32.140 real 0m43.142s 00:27:32.140 user 2m31.534s 00:27:32.140 sys 0m13.093s 00:27:32.140 16:48:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:32.140 16:48:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.140 ************************************ 00:27:32.140 END TEST nvmf_perf_adq 00:27:32.140 ************************************ 00:27:32.140 16:48:39 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:32.140 16:48:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:32.140 16:48:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:32.140 16:48:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.140 ************************************ 00:27:32.140 START TEST nvmf_shutdown 00:27:32.140 ************************************ 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:32.140 * Looking for test storage... 
00:27:32.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:32.140 16:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:32.398 ************************************ 00:27:32.398 START TEST nvmf_shutdown_tc1 00:27:32.398 ************************************ 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:32.398 16:48:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.398 16:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:34.927 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:34.927 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:34.927 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.928 16:48:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:34.928 Found net devices under 0000:09:00.0: cvl_0_0 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:34.928 Found net devices under 0000:09:00.1: cvl_0_1 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:27:34.928 00:27:34.928 --- 10.0.0.2 ping statistics --- 00:27:34.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.928 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:27:34.928 00:27:34.928 --- 10.0.0.1 ping statistics --- 00:27:34.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.928 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1862137 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1862137 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1862137 ']' 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.928 16:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.928 [2024-05-15 16:48:41.946046] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:27:34.928 [2024-05-15 16:48:41.946135] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.928 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.928 [2024-05-15 16:48:42.018983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.928 [2024-05-15 16:48:42.102564] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.928 [2024-05-15 16:48:42.102621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.928 [2024-05-15 16:48:42.102650] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.928 [2024-05-15 16:48:42.102662] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.928 [2024-05-15 16:48:42.102672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.928 [2024-05-15 16:48:42.102756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.928 [2024-05-15 16:48:42.102821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.928 [2024-05-15 16:48:42.102890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:34.928 [2024-05-15 16:48:42.102893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.186 [2024-05-15 16:48:42.251995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.186 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.186 Malloc1 00:27:35.186 [2024-05-15 16:48:42.345131] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:35.186 [2024-05-15 16:48:42.345512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.186 Malloc2 00:27:35.444 Malloc3 00:27:35.444 Malloc4 00:27:35.444 Malloc5 00:27:35.444 Malloc6 00:27:35.444 Malloc7 00:27:35.444 Malloc8 00:27:35.703 Malloc9 00:27:35.703 Malloc10 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.703 16:48:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1862316 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1862316 /var/tmp/bdevperf.sock 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1862316 ']' 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:35.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.703 "params": { 00:27:35.703 "name": "Nvme$subsystem", 00:27:35.703 "trtype": "$TEST_TRANSPORT", 00:27:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.703 "adrfam": "ipv4", 00:27:35.703 "trsvcid": "$NVMF_PORT", 00:27:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.703 "hdgst": ${hdgst:-false}, 00:27:35.703 "ddgst": ${ddgst:-false} 00:27:35.703 }, 00:27:35.703 "method": "bdev_nvme_attach_controller" 00:27:35.703 } 00:27:35.703 EOF 00:27:35.703 )") 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.703 "params": { 00:27:35.703 "name": "Nvme$subsystem", 00:27:35.703 "trtype": "$TEST_TRANSPORT", 00:27:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.703 "adrfam": "ipv4", 00:27:35.703 "trsvcid": "$NVMF_PORT", 00:27:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.703 "hdgst": ${hdgst:-false}, 00:27:35.703 "ddgst": ${ddgst:-false} 00:27:35.703 }, 00:27:35.703 "method": "bdev_nvme_attach_controller" 00:27:35.703 } 00:27:35.703 EOF 00:27:35.703 )") 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.703 "params": { 00:27:35.703 "name": "Nvme$subsystem", 00:27:35.703 "trtype": "$TEST_TRANSPORT", 00:27:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.703 "adrfam": "ipv4", 00:27:35.703 "trsvcid": "$NVMF_PORT", 00:27:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.703 "hdgst": ${hdgst:-false}, 00:27:35.703 "ddgst": ${ddgst:-false} 00:27:35.703 }, 00:27:35.703 "method": "bdev_nvme_attach_controller" 00:27:35.703 } 00:27:35.703 EOF 00:27:35.703 )") 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.703 "params": { 00:27:35.703 "name": "Nvme$subsystem", 00:27:35.703 "trtype": "$TEST_TRANSPORT", 00:27:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.703 "adrfam": "ipv4", 00:27:35.703 "trsvcid": "$NVMF_PORT", 00:27:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.703 "hdgst": ${hdgst:-false}, 00:27:35.703 "ddgst": ${ddgst:-false} 00:27:35.703 }, 00:27:35.703 "method": "bdev_nvme_attach_controller" 00:27:35.703 } 00:27:35.703 EOF 00:27:35.703 )") 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.703 "params": { 00:27:35.703 "name": "Nvme$subsystem", 00:27:35.703 "trtype": "$TEST_TRANSPORT", 00:27:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.703 "adrfam": "ipv4", 00:27:35.703 "trsvcid": "$NVMF_PORT", 00:27:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.703 "hdgst": ${hdgst:-false}, 00:27:35.703 "ddgst": ${ddgst:-false} 00:27:35.703 }, 00:27:35.703 "method": "bdev_nvme_attach_controller" 00:27:35.703 } 00:27:35.703 EOF 00:27:35.703 )") 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.703 "params": { 00:27:35.703 "name": "Nvme$subsystem", 00:27:35.703 "trtype": "$TEST_TRANSPORT", 00:27:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.703 "adrfam": "ipv4", 00:27:35.703 "trsvcid": "$NVMF_PORT", 00:27:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.703 "hdgst": ${hdgst:-false}, 00:27:35.703 "ddgst": ${ddgst:-false} 00:27:35.703 }, 00:27:35.703 "method": "bdev_nvme_attach_controller" 00:27:35.703 } 00:27:35.703 EOF 00:27:35.703 )") 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:27:35.703 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.703 { 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme$subsystem", 00:27:35.704 "trtype": "$TEST_TRANSPORT", 00:27:35.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "$NVMF_PORT", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.704 "hdgst": ${hdgst:-false}, 00:27:35.704 "ddgst": ${ddgst:-false} 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 } 00:27:35.704 EOF 00:27:35.704 )") 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.704 { 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme$subsystem", 00:27:35.704 "trtype": "$TEST_TRANSPORT", 00:27:35.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "$NVMF_PORT", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.704 "hdgst": ${hdgst:-false}, 00:27:35.704 "ddgst": ${ddgst:-false} 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 } 00:27:35.704 EOF 00:27:35.704 )") 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.704 { 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme$subsystem", 00:27:35.704 "trtype": "$TEST_TRANSPORT", 00:27:35.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "$NVMF_PORT", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.704 "hdgst": ${hdgst:-false}, 00:27:35.704 "ddgst": ${ddgst:-false} 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 } 00:27:35.704 EOF 00:27:35.704 )") 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.704 { 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme$subsystem", 00:27:35.704 "trtype": "$TEST_TRANSPORT", 00:27:35.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "$NVMF_PORT", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.704 "hdgst": ${hdgst:-false}, 00:27:35.704 "ddgst": ${ddgst:-false} 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 } 00:27:35.704 EOF 00:27:35.704 )") 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:35.704 16:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme1", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme2", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme3", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme4", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme5", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme6", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme7", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme8", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:35.704 "hdgst": false, 
00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme9", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 },{ 00:27:35.704 "params": { 00:27:35.704 "name": "Nvme10", 00:27:35.704 "trtype": "tcp", 00:27:35.704 "traddr": "10.0.0.2", 00:27:35.704 "adrfam": "ipv4", 00:27:35.704 "trsvcid": "4420", 00:27:35.704 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:35.704 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:35.704 "hdgst": false, 00:27:35.704 "ddgst": false 00:27:35.704 }, 00:27:35.704 "method": "bdev_nvme_attach_controller" 00:27:35.704 }' 00:27:35.704 [2024-05-15 16:48:42.857302] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:27:35.704 [2024-05-15 16:48:42.857379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:35.704 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.704 [2024-05-15 16:48:42.928961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.962 [2024-05-15 16:48:43.013425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1862316 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:37.858 16:48:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:38.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1862316 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:38.790 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1862137 00:27:38.790 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:38.790 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:38.790 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:38.790 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.791 { 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme$subsystem", 00:27:38.791 "trtype": "$TEST_TRANSPORT", 00:27:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "$NVMF_PORT", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.791 "hdgst": ${hdgst:-false}, 00:27:38.791 "ddgst": ${ddgst:-false} 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 } 00:27:38.791 EOF 00:27:38.791 )") 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
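For orientation before the second config dump below: the whole of shutdown tc1, condensed from the shutdown.sh line numbers visible in this trace. Paths are shortened, and gen_nvmf_target_json, waitforlisten, rpc_cmd, and stoptarget are the harness helpers seen elsewhere in the log; treat this as an outline rather than the verbatim script:

# shutdown tc1, reconstructed from the traced shutdown.sh lines: attach a
# throwaway app to all ten subsystems, kill it ungracefully, then prove the
# target survived by completing a short bdevperf verify run.
bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &   # @77
perfpid=$!                                                    # @78
waitforlisten $perfpid /var/tmp/bdevperf.sock                 # @79
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init         # @80

kill -9 $perfpid                                              # @83: SIGKILL, no cleanup
rm -f /var/run/spdk_bdev1                                     # @84
sleep 1                                                       # @87

kill -0 $nvmfpid                                              # @88: target must still be up
bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1                             # @91: I/O must complete

stoptarget                                                    # @94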
00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:38.791 16:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme1", 00:27:38.791 "trtype": "tcp", 00:27:38.791 "traddr": "10.0.0.2", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "4420", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.791 "hdgst": false, 00:27:38.791 "ddgst": false 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 },{ 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme2", 00:27:38.791 "trtype": "tcp", 00:27:38.791 "traddr": "10.0.0.2", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "4420", 00:27:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.791 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.791 "hdgst": false, 00:27:38.791 "ddgst": false 00:27:38.791 }, 00:27:38.791 "method": "bdev_nvme_attach_controller" 00:27:38.791 },{ 00:27:38.791 "params": { 00:27:38.791 "name": "Nvme3", 00:27:38.791 "trtype": "tcp", 00:27:38.791 "traddr": "10.0.0.2", 00:27:38.791 "adrfam": "ipv4", 00:27:38.791 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme4", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme5", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme6", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme7", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme8", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:38.792 "hdgst": false, 
00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme9", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 },{ 00:27:38.792 "params": { 00:27:38.792 "name": "Nvme10", 00:27:38.792 "trtype": "tcp", 00:27:38.792 "traddr": "10.0.0.2", 00:27:38.792 "adrfam": "ipv4", 00:27:38.792 "trsvcid": "4420", 00:27:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:38.792 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:38.792 "hdgst": false, 00:27:38.792 "ddgst": false 00:27:38.792 }, 00:27:38.792 "method": "bdev_nvme_attach_controller" 00:27:38.792 }' 00:27:38.792 [2024-05-15 16:48:45.902973] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:27:38.792 [2024-05-15 16:48:45.903058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862733 ] 00:27:38.792 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.792 [2024-05-15 16:48:45.977189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.091 [2024-05-15 16:48:46.064739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.479 Running I/O for 1 seconds... 00:27:41.851 00:27:41.851 Latency(us) 00:27:41.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme1n1 : 1.14 225.41 14.09 0.00 0.00 275346.20 22233.69 254765.13 00:27:41.851 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme2n1 : 1.15 279.03 17.44 0.00 0.00 221665.05 16117.00 237677.23 00:27:41.851 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme3n1 : 1.09 235.79 14.74 0.00 0.00 259545.32 17282.09 256318.58 00:27:41.851 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme4n1 : 1.14 225.22 14.08 0.00 0.00 262531.60 24466.77 254765.13 00:27:41.851 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme5n1 : 1.16 220.00 13.75 0.00 0.00 269920.14 24563.86 267192.70 00:27:41.851 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme6n1 : 1.10 232.57 14.54 0.00 0.00 249745.64 20680.25 256318.58 00:27:41.851 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme7n1 : 1.14 228.35 14.27 0.00 0.00 244002.05 17670.45 251658.24 00:27:41.851 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 
00:27:41.851 Nvme8n1 : 1.15 226.34 14.15 0.00 0.00 248589.37 849.54 265639.25 00:27:41.851 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme9n1 : 1.16 227.34 14.21 0.00 0.00 243485.69 1990.35 288940.94 00:27:41.851 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.851 Verification LBA range: start 0x0 length 0x400 00:27:41.851 Nvme10n1 : 1.17 273.58 17.10 0.00 0.00 199379.17 14660.65 245444.46 00:27:41.851 =================================================================================================================== 00:27:41.851 Total : 2373.65 148.35 0.00 0.00 245659.83 849.54 288940.94 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.851 rmmod nvme_tcp 00:27:41.851 rmmod nvme_fabrics 00:27:41.851 rmmod nvme_keyring 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.851 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1862137 ']' 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1862137 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1862137 ']' 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1862137 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1862137 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:41.852 16:48:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1862137' 00:27:41.852 killing process with pid 1862137 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1862137 00:27:41.852 [2024-05-15 16:48:48.987816] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:41.852 16:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1862137 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.417 16:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.318 00:27:44.318 real 0m12.107s 00:27:44.318 user 0m33.751s 00:27:44.318 sys 0m3.559s 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:44.318 ************************************ 00:27:44.318 END TEST nvmf_shutdown_tc1 00:27:44.318 ************************************ 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:44.318 ************************************ 00:27:44.318 START TEST nvmf_shutdown_tc2 00:27:44.318 ************************************ 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.318 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.577 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.578 16:48:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:44.578 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:44.578 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:44.578 Found net devices under 0000:09:00.0: cvl_0_0 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:44.578 Found net devices under 0000:09:00.1: cvl_0_1 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:44.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:27:44.578 00:27:44.578 --- 10.0.0.2 ping statistics --- 00:27:44.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.578 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:27:44.578 00:27:44.578 --- 10.0.0.1 ping statistics --- 00:27:44.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.578 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:44.578 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1863499 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1863499 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1863499 ']' 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:44.579 16:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.579 [2024-05-15 16:48:51.768184] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
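
The nvmf_tcp_init block above builds the test bed: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and the cross-namespace pings confirm the link before the target starts. Reassembled from the commands traced above into one runnable sketch (interface names, addresses, and port 4420 are the values from this run):

# Sketch of the nvmf_tcp_init topology, reassembled from the trace above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                     # start both ports from a clean slate
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                           # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> root ns

Every later nvmf_tgt invocation is then wrapped in `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix set at common.sh@243/@270), which is why the target listens on 10.0.0.2 while bdevperf connects from the root namespace.
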
00:27:44.579 [2024-05-15 16:48:51.768294] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.836 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.836 [2024-05-15 16:48:51.848855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.836 [2024-05-15 16:48:51.942604] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.836 [2024-05-15 16:48:51.942673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.836 [2024-05-15 16:48:51.942689] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.836 [2024-05-15 16:48:51.942703] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.836 [2024-05-15 16:48:51.942715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.836 [2024-05-15 16:48:51.942784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.836 [2024-05-15 16:48:51.942908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.836 [2024-05-15 16:48:51.942975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:44.836 [2024-05-15 16:48:51.942977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.094 [2024-05-15 16:48:52.100027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.094 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.094 Malloc1 00:27:45.094 [2024-05-15 16:48:52.189265] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:45.094 [2024-05-15 16:48:52.189623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.094 Malloc2 00:27:45.094 Malloc3 00:27:45.094 Malloc4 00:27:45.351 Malloc5 00:27:45.351 Malloc6 00:27:45.351 Malloc7 00:27:45.351 Malloc8 00:27:45.351 Malloc9 00:27:45.608 Malloc10 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.608 16:48:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1863678 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1863678 /var/tmp/bdevperf.sock 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1863678 ']' 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:45.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:45.608 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.609 { 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme$subsystem", 00:27:45.609 "trtype": "$TEST_TRANSPORT", 00:27:45.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "$NVMF_PORT", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.609 "hdgst": ${hdgst:-false}, 00:27:45.609 "ddgst": ${ddgst:-false} 00:27:45.609 }, 00:27:45.609 "method": "bdev_nvme_attach_controller" 00:27:45.609 } 00:27:45.609 EOF 00:27:45.609 )") 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
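
The repeated heredocs above are gen_nvmf_target_json at work: each pass through the loop appends one bdev_nvme_attach_controller fragment to the config array, one per subsystem, ten in all, and the jq / IFS=, / printf stage around this point joins the fragments with commas into the single document printed just below. A condensed sketch of the pattern with two subsystems instead of ten; the surrounding JSON skeleton is an assumption for illustration (the real one lives in nvmf/common.sh and may carry extra bdev options):

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced above (two subsystems).
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Join the fragments with commas and let jq validate/pretty-print the result.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON

The result never touches disk: shutdown.sh hands it to bdevperf through process substitution, which is the `--json /dev/fd/63` in the command line above.
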
00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:45.609 16:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:45.609 "params": { 00:27:45.609 "name": "Nvme1", 00:27:45.609 "trtype": "tcp", 00:27:45.609 "traddr": "10.0.0.2", 00:27:45.609 "adrfam": "ipv4", 00:27:45.609 "trsvcid": "4420", 00:27:45.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:45.609 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme2", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme3", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme4", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme5", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme6", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme7", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme8", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:45.610 "hdgst": false, 
00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme9", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 },{ 00:27:45.610 "params": { 00:27:45.610 "name": "Nvme10", 00:27:45.610 "trtype": "tcp", 00:27:45.610 "traddr": "10.0.0.2", 00:27:45.610 "adrfam": "ipv4", 00:27:45.610 "trsvcid": "4420", 00:27:45.610 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:45.610 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:45.610 "hdgst": false, 00:27:45.610 "ddgst": false 00:27:45.610 }, 00:27:45.610 "method": "bdev_nvme_attach_controller" 00:27:45.610 }' 00:27:45.610 [2024-05-15 16:48:52.701631] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:27:45.610 [2024-05-15 16:48:52.701721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863678 ] 00:27:45.610 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.610 [2024-05-15 16:48:52.774669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.866 [2024-05-15 16:48:52.859119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.234 Running I/O for 10 seconds... 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.492 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.749 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:47.750 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:47.750 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:48.007 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:48.007 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:48.007 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:48.007 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:48.007 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.007 16:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.007 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.007 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:48.007 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:48.007 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1863678 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1863678 ']' 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1863678 00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
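
waitforio, traced above, is how the test proves that traffic is actually flowing before it pulls the target out from under bdevperf: it polls the bdevperf RPC socket for Nvme1n1's read counter, and this run shows it climbing 3 -> 67 -> 195 until the 100-read threshold ends the loop with ret=0. A condensed sketch of that helper as the trace shows it (rpc_cmd is assumed to wrap scripts/rpc.py, as elsewhere in these tests):

# Sketch of target/shutdown.sh's waitforio loop, per the trace above.
waitforio() {
  local rpc_sock=$1 bdev=$2 i read_io_count
  for ((i = 10; i != 0; i--)); do        # up to ten polls
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      return 0                           # enough reads observed: I/O is live
    fi
    sleep 0.25                           # a quarter-second between polls
  done
  return 1
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # as invoked at shutdown.sh@107 above
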
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1863678
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1863678'
killing process with pid 1863678
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1863678
00:27:48.265 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1863678
00:27:48.265 Received shutdown signal, test time was about 1.009623 seconds
00:27:48.265
00:27:48.265 Latency(us)
00:27:48.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:48.265 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme1n1 : 1.00 255.72 15.98 0.00 0.00 247500.99 20874.43 253211.69
00:27:48.265 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme2n1 : 0.96 199.40 12.46 0.00 0.00 311228.74 21651.15 253211.69
00:27:48.265 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme3n1 : 0.98 261.88 16.37 0.00 0.00 232499.96 19320.98 253211.69
00:27:48.265 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme4n1 : 0.99 257.59 16.10 0.00 0.00 231903.76 15631.55 253211.69
00:27:48.265 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme5n1 : 1.01 253.76 15.86 0.00 0.00 231095.37 20388.98 253211.69
00:27:48.265 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme6n1 : 0.98 195.58 12.22 0.00 0.00 293090.04 39030.33 260978.92
00:27:48.265 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.265 Verification LBA range: start 0x0 length 0x400
00:27:48.265 Nvme7n1 : 0.99 259.60 16.22 0.00 0.00 216286.06 20971.52 248551.35
00:27:48.266 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.266 Verification LBA range: start 0x0 length 0x400
00:27:48.266 Nvme8n1 : 1.00 255.02 15.94 0.00 0.00 216011.66 18252.99 250104.79
00:27:48.266 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.266 Verification LBA range: start 0x0 length 0x400
00:27:48.266 Nvme9n1 : 0.97 198.68 12.42 0.00 0.00 269837.40 21651.15 254765.13
00:27:48.266 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:48.266 Verification LBA range: start 0x0 length 0x400
00:27:48.266 Nvme10n1 : 0.99 194.08 12.13 0.00 0.00 271592.11 21651.15 281173.71
00:27:48.266 ===================================================================================================================
00:27:48.266 Total : 2331.30 145.71 0.00 0.00 248289.89 15631.55 281173.71
00:27:48.523 16:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1863499
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:49.455 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1863499 ']'
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1863499
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1863499 ']'
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1863499
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1863499
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1863499'
killing process with pid 1863499
00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1863499
[2024-05-15 16:48:56.746862] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for
removal in v24.09 hit 1 times 00:27:49.713 16:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1863499 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.279 16:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.180 00:27:52.180 real 0m7.711s 00:27:52.180 user 0m23.359s 00:27:52.180 sys 0m1.546s 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.180 ************************************ 00:27:52.180 END TEST nvmf_shutdown_tc2 00:27:52.180 ************************************ 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.180 ************************************ 00:27:52.180 START TEST nvmf_shutdown_tc3 00:27:52.180 ************************************ 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:52.180 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:52.180 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:52.180 Found net devices under 0000:09:00.0: cvl_0_0 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:52.180 Found net devices under 0000:09:00.1: cvl_0_1 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.180 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.181 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:27:52.439 00:27:52.439 --- 10.0.0.2 ping statistics --- 00:27:52.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.439 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:27:52.439 00:27:52.439 --- 10.0.0.1 ping statistics --- 00:27:52.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.439 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1864589 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:52.439 16:48:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1864589 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1864589 ']' 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:52.439 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.439 [2024-05-15 16:48:59.517297] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:27:52.439 [2024-05-15 16:48:59.517382] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.439 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.439 [2024-05-15 16:48:59.593111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.697 [2024-05-15 16:48:59.680284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.697 [2024-05-15 16:48:59.680335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.697 [2024-05-15 16:48:59.680349] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.697 [2024-05-15 16:48:59.680361] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.697 [2024-05-15 16:48:59.680371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
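
Both nvmf_tgt instances in this file start with -m 0x1E, echoed by EAL above as -c 0x1E: each set bit in the mask pins one reactor to that core, so 0x1E = 0b11110 yields exactly the four "Reactor started on core 1..4" notices that follow, while bdevperf's -c 0x1 earlier kept its single reactor on core 0. A quick way to decode any such mask:

# Decode an SPDK/DPDK core mask: each set bit selects a core for a reactor.
mask=0x1E
printf 'mask %s -> cores:' "$mask"
for ((n = 0; n < 32; n++)); do
  (( (mask >> n) & 1 )) && printf ' %d' "$n"
done
echo   # for 0x1E this prints: mask 0x1E -> cores: 1 2 3 4
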
00:27:52.697 [2024-05-15 16:48:59.680464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.697 [2024-05-15 16:48:59.680529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.697 [2024-05-15 16:48:59.680579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.697 [2024-05-15 16:48:59.680581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.697 [2024-05-15 16:48:59.843774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.697 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.698 16:48:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.698 16:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.698 Malloc1 00:27:52.955 [2024-05-15 16:48:59.925135] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:52.955 [2024-05-15 16:48:59.925438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.955 Malloc2 00:27:52.955 Malloc3 00:27:52.955 Malloc4 00:27:52.955 Malloc5 00:27:52.955 Malloc6 00:27:53.213 Malloc7 00:27:53.213 Malloc8 00:27:53.213 Malloc9 00:27:53.213 Malloc10 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1864706 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1864706 /var/tmp/bdevperf.sock 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1864706 ']' 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.213 { 00:27:53.213 "params": { 00:27:53.213 "name": "Nvme$subsystem", 00:27:53.213 "trtype": "$TEST_TRANSPORT", 00:27:53.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.213 "adrfam": "ipv4", 00:27:53.213 "trsvcid": "$NVMF_PORT", 00:27:53.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.213 "hdgst": ${hdgst:-false}, 00:27:53.213 "ddgst": ${ddgst:-false} 00:27:53.213 }, 00:27:53.213 "method": "bdev_nvme_attach_controller" 00:27:53.213 } 00:27:53.213 EOF 00:27:53.213 )") 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.213 { 00:27:53.213 "params": { 00:27:53.213 "name": "Nvme$subsystem", 00:27:53.213 "trtype": "$TEST_TRANSPORT", 00:27:53.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.213 "adrfam": "ipv4", 00:27:53.213 "trsvcid": "$NVMF_PORT", 00:27:53.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.213 "hdgst": ${hdgst:-false}, 00:27:53.213 "ddgst": ${ddgst:-false} 00:27:53.213 }, 00:27:53.213 "method": "bdev_nvme_attach_controller" 00:27:53.213 } 00:27:53.213 EOF 00:27:53.213 )") 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.213 { 00:27:53.213 "params": { 00:27:53.213 "name": "Nvme$subsystem", 00:27:53.213 "trtype": "$TEST_TRANSPORT", 00:27:53.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.213 "adrfam": "ipv4", 00:27:53.213 "trsvcid": "$NVMF_PORT", 00:27:53.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.213 "hdgst": ${hdgst:-false}, 00:27:53.213 "ddgst": ${ddgst:-false} 00:27:53.213 }, 00:27:53.213 "method": "bdev_nvme_attach_controller" 00:27:53.213 } 00:27:53.213 EOF 00:27:53.213 )") 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.213 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.213 { 00:27:53.213 "params": { 
00:27:53.213 "name": "Nvme$subsystem", 00:27:53.213 "trtype": "$TEST_TRANSPORT", 00:27:53.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.213 "adrfam": "ipv4", 00:27:53.213 "trsvcid": "$NVMF_PORT", 00:27:53.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.213 "hdgst": ${hdgst:-false}, 00:27:53.213 "ddgst": ${ddgst:-false} 00:27:53.213 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.214 { 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme$subsystem", 00:27:53.214 "trtype": "$TEST_TRANSPORT", 00:27:53.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "$NVMF_PORT", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.214 "hdgst": ${hdgst:-false}, 00:27:53.214 "ddgst": ${ddgst:-false} 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.214 { 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme$subsystem", 00:27:53.214 "trtype": "$TEST_TRANSPORT", 00:27:53.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "$NVMF_PORT", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.214 "hdgst": ${hdgst:-false}, 00:27:53.214 "ddgst": ${ddgst:-false} 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.214 { 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme$subsystem", 00:27:53.214 "trtype": "$TEST_TRANSPORT", 00:27:53.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "$NVMF_PORT", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.214 "hdgst": ${hdgst:-false}, 00:27:53.214 "ddgst": ${ddgst:-false} 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.214 { 00:27:53.214 "params": { 00:27:53.214 "name": 
"Nvme$subsystem", 00:27:53.214 "trtype": "$TEST_TRANSPORT", 00:27:53.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "$NVMF_PORT", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.214 "hdgst": ${hdgst:-false}, 00:27:53.214 "ddgst": ${ddgst:-false} 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.214 { 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme$subsystem", 00:27:53.214 "trtype": "$TEST_TRANSPORT", 00:27:53.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "$NVMF_PORT", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.214 "hdgst": ${hdgst:-false}, 00:27:53.214 "ddgst": ${ddgst:-false} 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.214 { 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme$subsystem", 00:27:53.214 "trtype": "$TEST_TRANSPORT", 00:27:53.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "$NVMF_PORT", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.214 "hdgst": ${hdgst:-false}, 00:27:53.214 "ddgst": ${ddgst:-false} 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 } 00:27:53.214 EOF 00:27:53.214 )") 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:53.214 16:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme1", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme2", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme3", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme4", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme5", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme6", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme7", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme8", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:53.214 "hdgst": false, 
00:27:53.214 "ddgst": false 00:27:53.214 }, 00:27:53.214 "method": "bdev_nvme_attach_controller" 00:27:53.214 },{ 00:27:53.214 "params": { 00:27:53.214 "name": "Nvme9", 00:27:53.214 "trtype": "tcp", 00:27:53.214 "traddr": "10.0.0.2", 00:27:53.214 "adrfam": "ipv4", 00:27:53.214 "trsvcid": "4420", 00:27:53.214 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:53.214 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:53.214 "hdgst": false, 00:27:53.214 "ddgst": false 00:27:53.215 }, 00:27:53.215 "method": "bdev_nvme_attach_controller" 00:27:53.215 },{ 00:27:53.215 "params": { 00:27:53.215 "name": "Nvme10", 00:27:53.215 "trtype": "tcp", 00:27:53.215 "traddr": "10.0.0.2", 00:27:53.215 "adrfam": "ipv4", 00:27:53.215 "trsvcid": "4420", 00:27:53.215 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:53.215 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:53.215 "hdgst": false, 00:27:53.215 "ddgst": false 00:27:53.215 }, 00:27:53.215 "method": "bdev_nvme_attach_controller" 00:27:53.215 }' 00:27:53.215 [2024-05-15 16:49:00.433502] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:27:53.215 [2024-05-15 16:49:00.433621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864706 ] 00:27:53.472 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.472 [2024-05-15 16:49:00.509896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.472 [2024-05-15 16:49:00.594014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.369 Running I/O for 10 seconds... 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:55.369 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:55.627 16:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1864589 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1864589 ']' 00:27:55.892 16:49:03 
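The shutdown.sh@59-@67 iterations above are a bounded poll: up to ten times, read Nvme1n1's cumulative num_read_ops over the bdevperf RPC socket and stop once at least 100 reads have completed (3, then 67 in this trace; 131 clears the bar on the next poll). A standalone sketch of the same loop; the harness goes through its own rpc_cmd wrapper, so the scripts/rpc.py invocation here is an assumption:

# Sketch of the waitforio loop traced at target/shutdown.sh@59-@67.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough I/O observed; the target is demonstrably serving
            break
        fi
        sleep 0.25
    done
    return $ret
}

# e.g.: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1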
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1864589 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1864589 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1864589' 00:27:55.892 killing process with pid 1864589 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1864589 00:27:55.892 [2024-05-15 16:49:03.086587] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:55.892 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1864589 00:27:55.892 [2024-05-15 16:49:03.087381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b3a60 is same with the state(5) to be set 00:27:55.892 [2024-05-15 16:49:03.088262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b3a60 is same with the state(5) to be set 00:27:55.893 [2024-05-15 16:49:03.089651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd20 is same with the state(5) to be set 00:27:55.893 [2024-05-15 16:49:03.090482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd20 is same with the state(5) to be set 00:27:55.893 [2024-05-15 16:49:03.093173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.893 [2024-05-15 16:49:03.093229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.893 [2024-05-15 16:49:03.093251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.893 [2024-05-15 16:49:03.093277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.893 [2024-05-15 16:49:03.093292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.893 [2024-05-15 16:49:03.093306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.893 [2024-05-15 16:49:03.093322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.893 [2024-05-15 16:49:03.093336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.893 [2024-05-15 16:49:03.093351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a630 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
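The ABORTED - SQ DELETION completions above are bdevperf's outstanding requests being failed back as the target's queue pairs disappear; they follow directly from the killprocess call traced at common/autotest_common.sh@946-@970 earlier. A minimal sketch of that helper's logic as it appears in the trace (the real helper also special-cases sudo-wrapped processes, elided here):

# Sketch of the killprocess flow traced above (Linux branch only).
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1                       # @950: still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # @952: who is it?
    # @956: the real helper redirects to the child when comm is "sudo".
    echo "killing process with pid $pid"             # @964
    kill "$pid"                                      # @965: SIGTERM
    wait "$pid"                                      # @970: reap, keep status
}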
00:27:55.894 [2024-05-15 16:49:03.093572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c02c0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-05-15 16:49:03.093772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with [2024-05-15 16:49:03.093789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:27:55.894 id:0 cdw10:00000000 cdw11:00000000 00:27:55.894 [2024-05-15 16:49:03.093804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with [2024-05-15 16:49:03.093806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:27:55.894 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.894 [2024-05-15 16:49:03.093819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1087cb0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 
[2024-05-15 16:49:03.093896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.093991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set 00:27:55.894 [2024-05-15 16:49:03.094211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.894 [2024-05-15 16:49:03.094654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.894 [2024-05-15 16:49:03.094655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.894 [2024-05-15 16:49:03.094668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.895 [2024-05-15 16:49:03.094672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b43a0 is same with the state(5) to be set
00:27:55.895 [2024-05-15 16:49:03.094688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15 16:49:03.094870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.895 [2024-05-15 16:49:03.094886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.895 [2024-05-15
16:49:03.094900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.094916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.094931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.094946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.094960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.094980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.094996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.895 [2024-05-15 16:49:03.095602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.895 [2024-05-15 16:49:03.095616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.896 [2024-05-15 16:49:03.095853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.896 [2024-05-15 16:49:03.095868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.095884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.095899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.095915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.095929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.095945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.095961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.095977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.095992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.896 [2024-05-15 16:49:03.096224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.896 [2024-05-15 16:49:03.096227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.896 [2024-05-15 16:49:03.096239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.897 [2024-05-15 16:49:03.096466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.897 [2024-05-15 16:49:03.096479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:55.897 [2024-05-15 16:49:03.096532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set
00:27:55.897 [2024-05-15 16:49:03.096608] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x108c050 was disconnected and freed. reset controller.
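The burst above is the fate of every in-flight WRITE/READ on qid:1 once its submission queue is deleted for the reset: each command completes as "ABORTED - SQ DELETION (00/08)", the host then reports "CQ transport error -6 (No such device or address)", and the qpair is freed before the controller reset proceeds. The "(00/08)" pair is NVMe's status code type / status code. A short, hedged decoder for just this tuple (macro names are illustrative; the numeric values follow the NVMe specification):

    /* Decode the "(sct/sc)" tuple printed by the completion dump above.
     * 0x0 = generic command status set; 0x08 within that set = command
     * aborted due to SQ deletion. Only this one tuple is handled here. */
    #include <stdint.h>
    #include <stdio.h>

    #define SCT_GENERIC            0x0   /* illustrative names, values per NVMe spec */
    #define SC_ABORTED_SQ_DELETION 0x08

    static const char *status_string(uint8_t sct, uint8_t sc)
    {
        if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION) {
            return "ABORTED - SQ DELETION";
        }
        return "UNKNOWN STATUS";
    }

    int main(void)
    {
        uint8_t sct = 0x0, sc = 0x08;
        /* Prints "ABORTED - SQ DELETION (00/08)", matching the records above. */
        printf("%s (%02x/%02x)\n", status_string(sct, sc),
               (unsigned)sct, (unsigned)sc);
        return 0;
    }

The -6 that follows is an errno-style code (ENXIO, "No such device or address"): the socket behind the qpair is already gone, which is consistent with the "Bad file descriptor" flush failures and the connect() retries (errno 111, ECONNREFUSED) later in this log.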
00:27:55.897 [2024-05-15 16:49:03.096624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.096974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4840 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099707] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.898 [2024-05-15 16:49:03.099833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the 
state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.099993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ce0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.100682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:55.899 [2024-05-15 16:49:03.100723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c02c0 (9): Bad file descriptor 00:27:55.899 [2024-05-15 16:49:03.102003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set 00:27:55.899 [2024-05-15 16:49:03.102222] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b51a0 is same with the state(5) to be set
00:27:55.900 [2024-05-15 16:49:03.102357] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:55.900 [2024-05-15 16:49:03.102526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.900 [2024-05-15 16:49:03.102655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.900 [2024-05-15 16:49:03.102681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c02c0 with addr=10.0.0.2, port=4420
00:27:55.900 [2024-05-15 16:49:03.102699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c02c0 is same with the state(5) to be set
00:27:55.900 [2024-05-15 16:49:03.102771] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:55.900 [2024-05-15 16:49:03.102854] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:55.900 [2024-05-15 16:49:03.103366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c02c0 (9): Bad file descriptor
00:27:55.900 [2024-05-15 16:49:03.103416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155a630 (9): Bad file descriptor
00:27:55.900 [2024-05-15 16:49:03.103475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ca5a0 is same with the state(5) to be set
00:27:55.901 [2024-05-15 16:49:03.103682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa8b0 is same with the state(5) to be set
00:27:55.901 [2024-05-15 16:49:03.103831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1087cb0 (9): Bad file descriptor
00:27:55.901 [2024-05-15 16:49:03.103880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.103982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.901 [2024-05-15 16:49:03.103996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.901 [2024-05-15 16:49:03.104010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf410 is same with the state(5) to be set
00:27:55.901 [2024-05-15 16:49:03.104135] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:55.901 [2024-05-15 16:49:03.104668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set
00:27:55.902 [2024-05-15 16:49:03.105390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:55.902 [2024-05-15 16:49:03.105416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:55.902 [2024-05-15 16:49:03.105433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:55.902 [2024-05-15 16:49:03.106033] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:55.902 [2024-05-15 16:49:03.106110] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:55.902 [2024-05-15 16:49:03.106793] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:55.902 [2024-05-15 16:49:03.107365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff880 is same with the state(5) to be set
00:27:55.902 [2024-05-15 16:49:03.107487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.902 [2024-05-15 16:49:03.107522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.902 [2024-05-15 16:49:03.107545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.902 [2024-05-15 16:49:03.107564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.902 [2024-05-15 16:49:03.107583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.902 [2024-05-15 16:49:03.107599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.902 [2024-05-15 16:49:03.107618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.107982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.107999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.108016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.108033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.108049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.108066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.108083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.903 [2024-05-15 16:49:03.108101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.903 [2024-05-15 16:49:03.108116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.904 [2024-05-15 16:49:03.108568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.904 [2024-05-15 16:49:03.108584] nvme_qpair.c:
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.108981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.108996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.904 [2024-05-15 16:49:03.109342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.904 [2024-05-15 16:49:03.109359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109758] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1619770 was disconnected and freed. reset controller. 00:27:55.905 [2024-05-15 16:49:03.109937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.905 [2024-05-15 16:49:03.109961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.905 [2024-05-15 16:49:03.109982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163cf00 is same with the state(5) to be set 00:27:55.905 [2024-05-15 16:49:03.110055] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x163cf00 was disconnected and freed. reset controller. 00:27:55.905 [2024-05-15 16:49:03.111271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:55.905 [2024-05-15 16:49:03.111341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0680 (9): Bad file descriptor 00:27:55.905 [2024-05-15 16:49:03.112340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:55.905 [2024-05-15 16:49:03.112401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb6610 (9): Bad file descriptor 00:27:56.171 [2024-05-15 16:49:03.113044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.171 [2024-05-15 16:49:03.113199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.171 [2024-05-15 16:49:03.113246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d0680 with addr=10.0.0.2, port=4420 00:27:56.171 [2024-05-15 16:49:03.113275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d0680 is same with the state(5) to be set 00:27:56.171 [2024-05-15 16:49:03.113724] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:56.171 [2024-05-15 16:49:03.113777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:56.171 [2024-05-15 16:49:03.113913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.172 [2024-05-15 16:49:03.114032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.172 [2024-05-15 16:49:03.114057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb6610 with addr=10.0.0.2, port=4420 00:27:56.172 [2024-05-15 16:49:03.114073] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb6610 is same with the state(5) to be set 00:27:56.172 [2024-05-15 16:49:03.114094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0680 (9): Bad file descriptor 00:27:56.172 [2024-05-15 16:49:03.114143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d04a0 is same with the state(5) to be set 00:27:56.172 [2024-05-15 16:49:03.114324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ca5a0 (9): Bad file descriptor 00:27:56.172 [2024-05-15 16:49:03.114382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.172 [2024-05-15 16:49:03.114502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.114516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15758c0 is same with the state(5) to be set 
00:27:56.172 [2024-05-15 16:49:03.114548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aa8b0 (9): Bad file descriptor 00:27:56.172 [2024-05-15 16:49:03.114589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bf410 (9): Bad file descriptor 00:27:56.172 [2024-05-15 16:49:03.114830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.172 [2024-05-15 16:49:03.114951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.172 [2024-05-15 16:49:03.114976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c02c0 with addr=10.0.0.2, port=4420 00:27:56.172 [2024-05-15 16:49:03.114993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c02c0 is same with the state(5) to be set 00:27:56.172 [2024-05-15 16:49:03.115012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb6610 (9): Bad file descriptor 00:27:56.172 [2024-05-15 16:49:03.115030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:56.172 [2024-05-15 16:49:03.115044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:56.172 [2024-05-15 16:49:03.115059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:56.172 [2024-05-15 16:49:03.115120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.172 [2024-05-15 16:49:03.115816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.172 [2024-05-15 16:49:03.115832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.115848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.115864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.115880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.115901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.115916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.115932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.115953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.115971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.115986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:56.173 [2024-05-15 16:49:03.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 
[2024-05-15 16:49:03.116338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 
16:49:03.116658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.116978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.116995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.117012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.173 [2024-05-15 16:49:03.117027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.173 [2024-05-15 16:49:03.117043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.117059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.117076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.117091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.117107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.117122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.117149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.117175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.117198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.117213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.117239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.117255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.117271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108aeb0 is same with the state(5) to be set 00:27:56.174 [2024-05-15 16:49:03.118569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.118969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.118984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.174 [2024-05-15 16:49:03.119567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.174 [2024-05-15 16:49:03.119584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.119971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.119987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:56.175 [2024-05-15 16:49:03.120270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 
16:49:03.120591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.175 [2024-05-15 16:49:03.120636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.175 [2024-05-15 16:49:03.120651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161bf10 is same with the state(5) to be set 00:27:56.175 [2024-05-15 16:49:03.122299] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.175 [2024-05-15 16:49:03.122327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.175 [2024-05-15 16:49:03.122350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:56.175 [2024-05-15 16:49:03.122404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c02c0 (9): Bad file descriptor 00:27:56.175 [2024-05-15 16:49:03.122426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:56.175 [2024-05-15 16:49:03.122440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:56.175 [2024-05-15 16:49:03.122461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:56.175 [2024-05-15 16:49:03.122586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.175 [2024-05-15 16:49:03.122814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.175 [2024-05-15 16:49:03.122940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.176 [2024-05-15 16:49:03.122965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1087cb0 with addr=10.0.0.2, port=4420 00:27:56.176 [2024-05-15 16:49:03.122982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1087cb0 is same with the state(5) to be set 00:27:56.176 [2024-05-15 16:49:03.123090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.176 [2024-05-15 16:49:03.123200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.176 [2024-05-15 16:49:03.123235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a630 with addr=10.0.0.2, port=4420 00:27:56.176 [2024-05-15 16:49:03.123253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a630 is same with the state(5) to be set 00:27:56.176 [2024-05-15 16:49:03.123269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:56.176 [2024-05-15 16:49:03.123283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:56.176 [2024-05-15 16:49:03.123297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:56.176 [2024-05-15 16:49:03.123871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.176 [2024-05-15 16:49:03.123905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1087cb0 (9): Bad file descriptor 00:27:56.176 [2024-05-15 16:49:03.123928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155a630 (9): Bad file descriptor 00:27:56.176 [2024-05-15 16:49:03.123957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d04a0 (9): Bad file descriptor 00:27:56.176 [2024-05-15 16:49:03.123999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15758c0 (9): Bad file descriptor 00:27:56.176 [2024-05-15 16:49:03.124106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:56.176 [2024-05-15 16:49:03.124161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.176 [2024-05-15 16:49:03.124180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.176 [2024-05-15 16:49:03.124194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.176 [2024-05-15 16:49:03.124212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:56.176 [2024-05-15 16:49:03.124235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:56.176 [2024-05-15 16:49:03.124250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:56.176 [2024-05-15 16:49:03.124328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 16:49:03.124636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.176 [2024-05-15 16:49:03.124652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.176 [2024-05-15 
16:49:03.124668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.176 [2024-05-15 16:49:03.124683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 53 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:11-63, lba:17792-24448, len:128 ...]
00:27:56.178 [2024-05-15 16:49:03.126419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639080 is same with the state(5) to be set
00:27:56.178 [2024-05-15 16:49:03.127675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.178 [2024-05-15 16:49:03.127704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1-63, lba:24704-32640, len:128 ...]
00:27:56.180 [2024-05-15 16:49:03.137510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163a580 is same with the state(5) to be set
00:27:56.180 [2024-05-15 16:49:03.138832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.180 [2024-05-15 16:49:03.138857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1-63, lba:16512-24448, len:128 ...]
00:27:56.181 [2024-05-15 16:49:03.140936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149c380 is same with the state(5) to be set
00:27:56.181 [2024-05-15 16:49:03.142186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:56.181 [2024-05-15 16:49:03.142224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:56.181 [2024-05-15 16:49:03.142246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.181 [2024-05-15 16:49:03.142261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.181 [2024-05-15 16:49:03.142275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:56.181 [2024-05-15 16:49:03.142293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:56.181 [2024-05-15 16:49:03.142553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.181 [2024-05-15 16:49:03.142684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.142709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d0680 with addr=10.0.0.2, port=4420
00:27:56.182 [2024-05-15 16:49:03.142727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d0680 is same with the state(5) to be set
00:27:56.182 [2024-05-15 16:49:03.142808] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:56.182 [2024-05-15 16:49:03.142854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0680 (9): Bad file descriptor
00:27:56.182 [2024-05-15 16:49:03.142941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:56.182 [2024-05-15 16:49:03.143088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.143233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.143267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb6610 with addr=10.0.0.2, port=4420
00:27:56.182 [2024-05-15 16:49:03.143284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb6610 is same with the state(5) to be set
00:27:56.182 [2024-05-15 16:49:03.143405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.143529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.143554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c02c0 with addr=10.0.0.2, port=4420
00:27:56.182 [2024-05-15 16:49:03.143570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c02c0 is same with the state(5) to be set
00:27:56.182 [2024-05-15 16:49:03.143757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.143866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.143891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14aa8b0 with addr=10.0.0.2, port=4420
00:27:56.182 [2024-05-15 16:49:03.143913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa8b0 is same with the state(5) to be set
00:27:56.182 [2024-05-15 16:49:03.144031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.144156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.182 [2024-05-15 16:49:03.144180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bf410 with addr=10.0.0.2, port=4420
00:27:56.182 [2024-05-15 16:49:03.144196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf410 is same with the state(5) to be set
00:27:56.182 [2024-05-15 16:49:03.145036 .. 16:49:03.147100] nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical command/completion pairs condensed]
00:27:56.184 [2024-05-15 16:49:03.147116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163ba20 is same with the state(5) to be set
00:27:56.184 [2024-05-15 16:49:03.148394 .. 16:49:03.150449] nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical command/completion pairs condensed]
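On Linux, errno 111 in the connect() failures above is ECONNREFUSED: the shutdown test has already torn down the NVMe/TCP target, so nothing is listening on 10.0.0.2:4420 any more and every reconnect attempt is refused. A standalone sketch reproducing the same report (address and port taken from the log; run against any endpoint with no listener, and note that an unreachable host would produce a different errno):

/* econnrefused.c - reproduce "connect() failed, errno = 111". */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* with the target gone: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}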
00:27:56.185 [2024-05-15 16:49:03.150464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161aa20 is same with the state(5) to be set
00:27:56.185 [2024-05-15 16:49:03.152641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:56.185 [2024-05-15 16:49:03.152674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.185 [2024-05-15 16:49:03.152694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:56.185 task offset: 30208 on job bdev=Nvme2n1 fails
00:27:56.185
00:27:56.185 Latency(us)
00:27:56.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.185 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.185 Job: Nvme1n1 ended in about 0.89 seconds with error
00:27:56.185 Verification LBA range: start 0x0 length 0x400
00:27:56.185 Nvme1n1 : 0.89 143.11 8.94 71.56 0.00 294785.07 22816.24 256318.58
00:27:56.185 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.185 Job: Nvme2n1 ended in about 0.88 seconds with error
00:27:56.185 Verification LBA range: start 0x0 length 0x400
00:27:56.185 Nvme2n1 : 0.88 219.33 13.71 73.11 0.00 211695.45 4223.43 248551.35
00:27:56.185 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.185 Job: Nvme3n1 ended in about 0.90 seconds with error
00:27:56.185 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme3n1 : 0.90 141.67 8.85 70.83 0.00 285627.99 24272.59 260978.92
00:27:56.186 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme4n1 ended in about 0.91 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme4n1 : 0.91 209.92 13.12 69.97 0.00 212382.53 17573.36 251658.24
00:27:56.186 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme5n1 ended in about 0.92 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme5n1 : 0.92 139.43 8.71 69.72 0.00 278365.68 23690.05 254765.13
00:27:56.186 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme6n1 ended in about 0.92 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme6n1 : 0.92 138.50 8.66 69.25 0.00 274358.93 20194.80 254765.13
00:27:56.186 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme7n1 ended in about 0.89 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme7n1 : 0.89 215.03 13.44 1.13 0.00 254836.94 42137.22 240784.12
00:27:56.186 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme8n1 ended in about 0.89 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme8n1 : 0.89 220.92 13.81 67.63 0.00 187158.76 3665.16 234570.33
00:27:56.186 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme9n1 ended in about 0.93 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme9n1 : 0.93 138.00 8.63 69.00 0.00 257318.05 24758.04 293601.28
00:27:56.186 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.186 Job: Nvme10n1 ended in about 0.90 seconds with error
00:27:56.186 Verification LBA range: start 0x0 length 0x400
00:27:56.186 Nvme10n1 : 0.90 142.58 8.91 71.29 0.00 241572.72 18252.99 259425.47
00:27:56.186 ===================================================================================================================
00:27:56.186 Total : 1708.50 106.78 633.48 0.00 245622.52 3665.16 293601.28
00:27:56.186 [2024-05-15 16:49:03.180019] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:56.186 [2024-05-15 16:49:03.180098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:56.186 [2024-05-15 16:49:03.180441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.186 [2024-05-15 16:49:03.180579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.186 [2024-05-15 16:49:03.180607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ca5a0 with addr=10.0.0.2, port=4420
00:27:56.186 [2024-05-15 16:49:03.180628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ca5a0 is same with the state(5) to be set
00:27:56.186 [2024-05-15 16:49:03.180655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb6610 (9): Bad file descriptor
00:27:56.186 [2024-05-15 16:49:03.180678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c02c0 (9): Bad file descriptor
00:27:56.186 [2024-05-15 16:49:03.180698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aa8b0 (9): Bad file descriptor
00:27:56.186 [2024-05-15 16:49:03.180717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bf410 (9): Bad file descriptor
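Two consistency checks fall out of the summary table above (column meanings assumed from the printed header, not taken from bdevperf source): MiB/s is IOPS scaled by the 65536-byte IO size, and the Total row's IOPS is the sum of the per-device IOPS up to per-row rounding. A standalone sketch verifying both against the printed numbers:

/* bdevperf_check.c - sanity-check the summary table's arithmetic. */
#include <stdio.h>

int main(void)
{
    double iops[] = { 143.11, 219.33, 141.67, 209.92, 139.43,
                      138.50, 215.03, 220.92, 138.00, 142.58 };
    double total = 0.0;

    for (int i = 0; i < 10; i++)
        total += iops[i];

    /* 143.11 IOPS * 65536 B / 2^20 = 8.94 MiB/s, matching the Nvme1n1 row */
    printf("Nvme1n1 MiB/s: %.2f\n", iops[0] * 65536.0 / 1048576.0);
    /* prints ~1708.49; the table shows 1708.50 (per-row rounding) */
    printf("Total IOPS:    %.2f\n", total);
    return 0;
}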
descriptor 00:27:56.186 [2024-05-15 16:49:03.180735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:56.186 [2024-05-15 16:49:03.180749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:56.186 [2024-05-15 16:49:03.180765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:56.186 [2024-05-15 16:49:03.180835] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.186 [2024-05-15 16:49:03.180861] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.186 [2024-05-15 16:49:03.180886] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.186 [2024-05-15 16:49:03.180906] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.186 [2024-05-15 16:49:03.180926] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.186 [2024-05-15 16:49:03.181058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.186 [2024-05-15 16:49:03.181248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.181373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.181401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a630 with addr=10.0.0.2, port=4420 00:27:56.186 [2024-05-15 16:49:03.181418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a630 is same with the state(5) to be set 00:27:56.186 [2024-05-15 16:49:03.181531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.181650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.181677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1087cb0 with addr=10.0.0.2, port=4420 00:27:56.186 [2024-05-15 16:49:03.181694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1087cb0 is same with the state(5) to be set 00:27:56.186 [2024-05-15 16:49:03.181802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.181909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.181935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15758c0 with addr=10.0.0.2, port=4420 00:27:56.186 [2024-05-15 16:49:03.181951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15758c0 is same with the state(5) to be set 00:27:56.186 [2024-05-15 16:49:03.182065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.182183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.186 [2024-05-15 16:49:03.182210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d04a0 with addr=10.0.0.2, port=4420 00:27:56.186 [2024-05-15 16:49:03.182234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14d04a0 is same with the state(5) to be set 00:27:56.186 [2024-05-15 16:49:03.182253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ca5a0 (9): Bad file descriptor 00:27:56.186 [2024-05-15 16:49:03.182272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:56.186 [2024-05-15 16:49:03.182287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:56.186 [2024-05-15 16:49:03.182301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:56.186 [2024-05-15 16:49:03.182322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:56.186 [2024-05-15 16:49:03.182338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:56.186 [2024-05-15 16:49:03.182352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:56.186 [2024-05-15 16:49:03.182369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:56.186 [2024-05-15 16:49:03.182384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:56.186 [2024-05-15 16:49:03.182398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:56.186 [2024-05-15 16:49:03.182415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:56.186 [2024-05-15 16:49:03.182430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:56.186 [2024-05-15 16:49:03.182444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:56.187 [2024-05-15 16:49:03.182464] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.187 [2024-05-15 16:49:03.182505] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.187 [2024-05-15 16:49:03.182534] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.187 [2024-05-15 16:49:03.182555] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.187 [2024-05-15 16:49:03.182574] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:56.187 [2024-05-15 16:49:03.183174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.183199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.183213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.183233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.187 [2024-05-15 16:49:03.183262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155a630 (9): Bad file descriptor 00:27:56.187 [2024-05-15 16:49:03.183285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1087cb0 (9): Bad file descriptor 00:27:56.187 [2024-05-15 16:49:03.183304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15758c0 (9): Bad file descriptor 00:27:56.187 [2024-05-15 16:49:03.183322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d04a0 (9): Bad file descriptor 00:27:56.187 [2024-05-15 16:49:03.183338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:56.187 [2024-05-15 16:49:03.183352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:56.187 [2024-05-15 16:49:03.183366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:56.187 [2024-05-15 16:49:03.183707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:56.187 [2024-05-15 16:49:03.183738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.183763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:56.187 [2024-05-15 16:49:03.183779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:56.187 [2024-05-15 16:49:03.183794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:56.187 [2024-05-15 16:49:03.183811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.187 [2024-05-15 16:49:03.183826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.187 [2024-05-15 16:49:03.183840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.187 [2024-05-15 16:49:03.183856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:56.187 [2024-05-15 16:49:03.183871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:56.187 [2024-05-15 16:49:03.183884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:56.187 [2024-05-15 16:49:03.183901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:56.187 [2024-05-15 16:49:03.183916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:56.187 [2024-05-15 16:49:03.183930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:56.187 [2024-05-15 16:49:03.183987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.184008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
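
The disconnect, reconnect, fail cycle above repeats once for every subsystem the test attached (cnode1 through cnode10). When triaging a run like this by hand, one way to see which bdev_nvme controllers an SPDK application still holds is the JSON-RPC client that the harness's rpc_cmd wrapper drives; a sketch, assuming the checkout path from this job and an application whose RPC socket is still answering (the shutdown target here is already gone, so this is mainly useful earlier in a run or against the bdevperf side):

    # Lists the bdev_nvme controllers known to the running SPDK app; the
    # test scripts issue the same call via rpc_cmd and grep the names.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" bdev_nvme_get_controllers
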
00:27:56.187 [2024-05-15 16:49:03.184026] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.184039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.187 [2024-05-15 16:49:03.184145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.187 [2024-05-15 16:49:03.184271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.187 [2024-05-15 16:49:03.184297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d0680 with addr=10.0.0.2, port=4420 00:27:56.187 [2024-05-15 16:49:03.184314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d0680 is same with the state(5) to be set 00:27:56.187 [2024-05-15 16:49:03.184360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0680 (9): Bad file descriptor 00:27:56.187 [2024-05-15 16:49:03.184405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:56.187 [2024-05-15 16:49:03.184424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:56.187 [2024-05-15 16:49:03.184439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:56.187 [2024-05-15 16:49:03.184478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.445 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:56.445 16:49:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1864706 00:27:57.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1864706) - No such process 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.819 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.819 rmmod nvme_tcp 00:27:57.819 rmmod nvme_fabrics 00:27:57.819 rmmod nvme_keyring 00:27:57.819 16:49:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.820 16:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.720 16:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:59.720 00:27:59.720 real 0m7.461s 00:27:59.720 user 0m18.038s 00:27:59.720 sys 0m1.509s 00:27:59.720 16:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:59.720 16:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:59.720 ************************************ 00:27:59.720 END TEST nvmf_shutdown_tc3 00:27:59.720 ************************************ 00:27:59.720 16:49:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:59.720 00:27:59.720 real 0m27.514s 00:27:59.720 user 1m15.239s 00:27:59.720 sys 0m6.768s 00:27:59.720 16:49:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:59.720 16:49:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.720 ************************************ 00:27:59.720 END TEST nvmf_shutdown 00:27:59.720 ************************************ 00:27:59.720 16:49:06 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.720 16:49:06 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.720 16:49:06 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:27:59.720 16:49:06 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:59.720 16:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.720 ************************************ 00:27:59.720 START TEST nvmf_multicontroller 00:27:59.720 ************************************ 
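
Before the multicontroller test proper starts, sourcing nvmf/common.sh establishes the host identity: as the trace below shows, nvme-cli generates a fresh host NQN and the host ID is taken from its UUID suffix. The same two steps in isolation, as a sketch:

    # `nvme gen-hostnqn` prints an NQN of the form
    # nqn.2014-08.org.nvmexpress:uuid:<uuid>; the host ID is the UUID part.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
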
00:27:59.720 16:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:59.720 * Looking for test storage... 00:27:59.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.721 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:59.979 16:49:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.979 16:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.511 16:49:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:02.511 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:02.511 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:02.511 Found net devices under 0000:09:00.0: cvl_0_0 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:02.511 Found net devices under 0000:09:00.1: cvl_0_1 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.511 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.512 16:49:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:28:02.512 00:28:02.512 --- 10.0.0.2 ping statistics --- 00:28:02.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.512 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:28:02.512 00:28:02.512 --- 10.0.0.1 ping statistics --- 00:28:02.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.512 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1867577 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1867577 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1867577 ']' 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:02.512 16:49:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.788 [2024-05-15 16:49:09.770931] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:28:02.788 [2024-05-15 16:49:09.771006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.788 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.788 [2024-05-15 16:49:09.850566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.788 [2024-05-15 16:49:09.937565] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.788 [2024-05-15 16:49:09.937634] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.788 [2024-05-15 16:49:09.937661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.788 [2024-05-15 16:49:09.937675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.788 [2024-05-15 16:49:09.937688] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.788 [2024-05-15 16:49:09.937791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.788 [2024-05-15 16:49:09.937909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.788 [2024-05-15 16:49:09.937912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 [2024-05-15 16:49:10.073794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 Malloc0 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 [2024-05-15 16:49:10.133607] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:03.059 [2024-05-15 16:49:10.133895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 [2024-05-15 16:49:10.141731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 Malloc1 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1867614 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1867614 /var/tmp/bdevperf.sock 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1867614 ']' 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:03.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
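
bdevperf is launched with -z, so it sits idle on its own RPC socket and the test drives it entirely over JSON-RPC against /var/tmp/bdevperf.sock. The first attach it performs, reproduced standalone with the exact arguments from this run (controller NVMe0 to cnode1 over TCP, host side pinned to address 10.0.0.2 and source port 60000):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

The NOT-wrapped calls that follow rerun this command with a different host NQN (-q), a different subsystem, or an explicit -x multipath mode; each is expected to fail with JSON-RPC code -114 because a controller named NVMe0 already exists on that network path, and the test asserts exactly that.
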
00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:03.059 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.317 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:03.317 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:03.317 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:03.317 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.317 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.574 NVMe0n1 00:28:03.574 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.575 1 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.575 request: 00:28:03.575 { 00:28:03.575 "name": "NVMe0", 00:28:03.575 "trtype": "tcp", 00:28:03.575 "traddr": "10.0.0.2", 00:28:03.575 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:03.575 "hostaddr": "10.0.0.2", 00:28:03.575 "hostsvcid": "60000", 00:28:03.575 "adrfam": "ipv4", 00:28:03.575 "trsvcid": "4420", 00:28:03.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.575 "method": 
"bdev_nvme_attach_controller", 00:28:03.575 "req_id": 1 00:28:03.575 } 00:28:03.575 Got JSON-RPC error response 00:28:03.575 response: 00:28:03.575 { 00:28:03.575 "code": -114, 00:28:03.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:03.575 } 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.575 request: 00:28:03.575 { 00:28:03.575 "name": "NVMe0", 00:28:03.575 "trtype": "tcp", 00:28:03.575 "traddr": "10.0.0.2", 00:28:03.575 "hostaddr": "10.0.0.2", 00:28:03.575 "hostsvcid": "60000", 00:28:03.575 "adrfam": "ipv4", 00:28:03.575 "trsvcid": "4420", 00:28:03.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.575 "method": "bdev_nvme_attach_controller", 00:28:03.575 "req_id": 1 00:28:03.575 } 00:28:03.575 Got JSON-RPC error response 00:28:03.575 response: 00:28:03.575 { 00:28:03.575 "code": -114, 00:28:03.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:03.575 } 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.575 request: 00:28:03.575 { 00:28:03.575 "name": "NVMe0", 00:28:03.575 "trtype": "tcp", 00:28:03.575 "traddr": "10.0.0.2", 00:28:03.575 "hostaddr": "10.0.0.2", 00:28:03.575 "hostsvcid": "60000", 00:28:03.575 "adrfam": "ipv4", 00:28:03.575 "trsvcid": "4420", 00:28:03.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.575 "multipath": "disable", 00:28:03.575 "method": "bdev_nvme_attach_controller", 00:28:03.575 "req_id": 1 00:28:03.575 } 00:28:03.575 Got JSON-RPC error response 00:28:03.575 response: 00:28:03.575 { 00:28:03.575 "code": -114, 00:28:03.575 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:03.575 } 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.575 request: 00:28:03.575 { 00:28:03.575 "name": "NVMe0", 00:28:03.575 "trtype": "tcp", 00:28:03.575 "traddr": "10.0.0.2", 00:28:03.575 "hostaddr": "10.0.0.2", 00:28:03.575 "hostsvcid": "60000", 00:28:03.575 "adrfam": "ipv4", 00:28:03.575 "trsvcid": "4420", 00:28:03.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.575 "multipath": "failover", 00:28:03.575 "method": "bdev_nvme_attach_controller", 00:28:03.575 "req_id": 1 00:28:03.575 } 00:28:03.575 Got JSON-RPC error response 00:28:03.575 response: 00:28:03.575 { 00:28:03.575 "code": -114, 00:28:03.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:03.575 } 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.575 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.832 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.832 00:28:03.832 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:03.833 16:49:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:05.204 0 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1867614 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1867614 ']' 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1867614 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1867614 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1867614' 00:28:05.204 killing process with pid 1867614 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1867614 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1867614 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:05.204 16:49:12 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:05.204 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:05.204 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:05.204 [2024-05-15 16:49:10.241020] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:28:05.204 [2024-05-15 16:49:10.241117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867614 ] 00:28:05.204 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.204 [2024-05-15 16:49:10.311149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.204 [2024-05-15 16:49:10.393800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.204 [2024-05-15 16:49:10.970833] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name f606c62f-22c2-4c93-97df-d8175cf965d0 already exists 00:28:05.204 [2024-05-15 16:49:10.970874] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:f606c62f-22c2-4c93-97df-d8175cf965d0 alias for bdev NVMe1n1 00:28:05.204 [2024-05-15 16:49:10.970892] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:05.204 Running I/O for 1 seconds... 
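The attach/detach exchange above is the core of the multicontroller test: re-attaching NVMe0 on the identical path is rejected with -114, a second listener port is accepted as a failover path, that path is dropped again, and an independent NVMe1 controller is attached before bdevperf drives I/O. A minimal sketch of the same sequence, assuming rpc_cmd wraps SPDK's scripts/rpc.py as it does in this harness (all flags taken from the log); the bdevperf results follow below.

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Duplicate attach on the same path fails with -114; a second port is
    # accepted as a failover path for the same controller name.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Drop the secondary path again, then attach an independent controller.
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers | grep -c NVMe   # the test expects 2
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests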
00:28:05.204
00:28:05.204 Latency(us)
00:28:05.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:05.204 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:28:05.204 NVMe0n1 : 1.00 19206.27 75.02 0.00 0.00 6652.75 3252.53 11796.48
00:28:05.204 ===================================================================================================================
00:28:05.204 Total : 19206.27 75.02 0.00 0.00 6652.75 3252.53 11796.48
00:28:05.204 Received shutdown signal, test time was about 1.000000 seconds
00:28:05.204
00:28:05.204 Latency(us)
00:28:05.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:05.204 ===================================================================================================================
00:28:05.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:05.205 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:05.205 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:05.205 rmmod nvme_tcp
00:28:05.205 rmmod nvme_fabrics
00:28:05.205 rmmod nvme_keyring
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1867577 ']'
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1867577
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1867577 ']'
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1867577
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1867577
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1867577'
00:28:05.462 killing process with pid 1867577
00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1867577
00:28:05.462 [2024-05-15 
16:49:12.466816] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:05.462 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1867577 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.720 16:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.618 16:49:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:07.618 00:28:07.618 real 0m7.903s 00:28:07.618 user 0m11.498s 00:28:07.618 sys 0m2.631s 00:28:07.618 16:49:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:07.618 16:49:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.618 ************************************ 00:28:07.618 END TEST nvmf_multicontroller 00:28:07.618 ************************************ 00:28:07.618 16:49:14 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:07.618 16:49:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:07.618 16:49:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:07.618 16:49:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.618 ************************************ 00:28:07.618 START TEST nvmf_aer 00:28:07.618 ************************************ 00:28:07.618 16:49:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:07.875 * Looking for test storage... 
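The real/user/sys figures just above come from run_test, the harness wrapper that surrounds each suite with the START/END banners and bash's time builtin. Each suite can also be launched standalone from an SPDK checkout in the same way nvmf.sh invokes it here; a sketch, assuming a host already prepared for NVMe-oF testing and root privileges (the aer.sh output resumes below):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/host/aer.sh --transport=tcp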
00:28:07.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.875 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:07.876 16:49:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:10.404 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:28:10.404 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:10.404 Found net devices under 0000:09:00.0: cvl_0_0 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:10.404 Found net devices under 0000:09:00.1: cvl_0_1 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.404 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.405 
16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:28:10.405 00:28:10.405 --- 10.0.0.2 ping statistics --- 00:28:10.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.405 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:10.405 00:28:10.405 --- 10.0.0.1 ping statistics --- 00:28:10.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.405 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1870111 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1870111 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1870111 ']' 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.405 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.405 [2024-05-15 16:49:17.516434] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:28:10.405 [2024-05-15 16:49:17.516532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.405 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.405 [2024-05-15 16:49:17.592817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.663 [2024-05-15 16:49:17.676428] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.663 [2024-05-15 16:49:17.676485] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:10.663 [2024-05-15 16:49:17.676517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.663 [2024-05-15 16:49:17.676529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.663 [2024-05-15 16:49:17.676539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.663 [2024-05-15 16:49:17.676691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.663 [2024-05-15 16:49:17.676752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.663 [2024-05-15 16:49:17.676778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.663 [2024-05-15 16:49:17.676780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 [2024-05-15 16:49:17.819802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 Malloc0 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 [2024-05-15 16:49:17.870242] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:10.663 [2024-05-15 16:49:17.870553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.663 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.663 [ 00:28:10.663 { 00:28:10.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:10.663 "subtype": "Discovery", 00:28:10.664 "listen_addresses": [], 00:28:10.664 "allow_any_host": true, 00:28:10.664 "hosts": [] 00:28:10.664 }, 00:28:10.664 { 00:28:10.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.664 "subtype": "NVMe", 00:28:10.664 "listen_addresses": [ 00:28:10.664 { 00:28:10.664 "trtype": "TCP", 00:28:10.664 "adrfam": "IPv4", 00:28:10.664 "traddr": "10.0.0.2", 00:28:10.664 "trsvcid": "4420" 00:28:10.664 } 00:28:10.664 ], 00:28:10.664 "allow_any_host": true, 00:28:10.664 "hosts": [], 00:28:10.664 "serial_number": "SPDK00000000000001", 00:28:10.664 "model_number": "SPDK bdev Controller", 00:28:10.664 "max_namespaces": 2, 00:28:10.664 "min_cntlid": 1, 00:28:10.664 "max_cntlid": 65519, 00:28:10.664 "namespaces": [ 00:28:10.664 { 00:28:10.664 "nsid": 1, 00:28:10.664 "bdev_name": "Malloc0", 00:28:10.664 "name": "Malloc0", 00:28:10.664 "nguid": "6031ADD701FD45729DCB83D04C1D0684", 00:28:10.664 "uuid": "6031add7-01fd-4572-9dcb-83d04c1d0684" 00:28:10.664 } 00:28:10.664 ] 00:28:10.664 } 00:28:10.664 ] 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1870255 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:10.664 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:10.921 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.921 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:10.921 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:10.921 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:10.921 16:49:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:10.921 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:10.921 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:10.921 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:10.921 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.178 Malloc1 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.178 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.178 Asynchronous Event Request test 00:28:11.178 Attaching to 10.0.0.2 00:28:11.178 Attached to 10.0.0.2 00:28:11.178 Registering asynchronous event callbacks... 00:28:11.178 Starting namespace attribute notice tests for all controllers... 00:28:11.178 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:11.178 aer_cb - Changed Namespace 00:28:11.178 Cleaning up... 
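The block above is the heart of the AER test: test/nvme/aer/aer connects expecting two namespaces, and hot-adding Malloc1 as nsid 2 raises a Namespace Attribute Changed event, the aer_cb for log page 4 seen in the output. A condensed sketch of the flow, with scripts/rpc.py standing in for the harness's rpc_cmd and every value taken from this log; the subsystem listing that follows shows both namespaces in place.

    # Target setup: TCP transport, one malloc namespace, a listener on 4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start the AER listener (-n 2: expect two namespaces, -t: touch file that
    # signals readiness), then hot-add the second namespace; the AEN fires and
    # the tool cleans up and exits.
    test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid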
00:28:11.178 [ 00:28:11.178 { 00:28:11.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:11.178 "subtype": "Discovery", 00:28:11.178 "listen_addresses": [], 00:28:11.178 "allow_any_host": true, 00:28:11.178 "hosts": [] 00:28:11.178 }, 00:28:11.178 { 00:28:11.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.178 "subtype": "NVMe", 00:28:11.178 "listen_addresses": [ 00:28:11.178 { 00:28:11.178 "trtype": "TCP", 00:28:11.178 "adrfam": "IPv4", 00:28:11.178 "traddr": "10.0.0.2", 00:28:11.178 "trsvcid": "4420" 00:28:11.178 } 00:28:11.178 ], 00:28:11.178 "allow_any_host": true, 00:28:11.178 "hosts": [], 00:28:11.178 "serial_number": "SPDK00000000000001", 00:28:11.178 "model_number": "SPDK bdev Controller", 00:28:11.178 "max_namespaces": 2, 00:28:11.178 "min_cntlid": 1, 00:28:11.178 "max_cntlid": 65519, 00:28:11.178 "namespaces": [ 00:28:11.178 { 00:28:11.178 "nsid": 1, 00:28:11.178 "bdev_name": "Malloc0", 00:28:11.179 "name": "Malloc0", 00:28:11.179 "nguid": "6031ADD701FD45729DCB83D04C1D0684", 00:28:11.179 "uuid": "6031add7-01fd-4572-9dcb-83d04c1d0684" 00:28:11.179 }, 00:28:11.179 { 00:28:11.179 "nsid": 2, 00:28:11.179 "bdev_name": "Malloc1", 00:28:11.179 "name": "Malloc1", 00:28:11.179 "nguid": "D0FA4E9B75764AB1BAFB3708155CE876", 00:28:11.179 "uuid": "d0fa4e9b-7576-4ab1-bafb-3708155ce876" 00:28:11.179 } 00:28:11.179 ] 00:28:11.179 } 00:28:11.179 ] 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1870255 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.179 rmmod nvme_tcp 00:28:11.179 rmmod nvme_fabrics 00:28:11.179 rmmod nvme_keyring 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1870111 ']' 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1870111 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1870111 ']' 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1870111 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1870111 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1870111' 00:28:11.179 killing process with pid 1870111 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1870111 00:28:11.179 [2024-05-15 16:49:18.398866] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:11.179 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1870111 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.438 16:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.968 16:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.968 00:28:13.968 real 0m5.834s 00:28:13.968 user 0m4.489s 00:28:13.968 sys 0m2.226s 00:28:13.968 16:49:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.968 16:49:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:13.968 ************************************ 00:28:13.968 END TEST nvmf_aer 00:28:13.968 ************************************ 00:28:13.968 16:49:20 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:13.968 16:49:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:13.968 16:49:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:13.968 16:49:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.968 ************************************ 00:28:13.968 START TEST nvmf_async_init 00:28:13.968 ************************************ 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 
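nvmf_async_init exercises asynchronous nvme bdev creation on top of a null bdev; the variables logged below give that bdev's shape: name null0, size 1024, block size 512, later surfaced as nvme0 with a fixed nguid. A one-line preview, assuming the script drives the standard bdev_null_create RPC with exactly these logged values:

    scripts/rpc.py bdev_null_create null0 1024 512   # name, size, block size as logged below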
00:28:13.968 * Looking for test storage... 00:28:13.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.968 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e2a5ad510e0144e1827aa9ef1db1e6e7 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.969 16:49:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.969 16:49:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:16.508 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:16.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:16.508 Found net devices under 0000:09:00.0: cvl_0_0 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:16.508 Found net devices under 0000:09:00.1: cvl_0_1 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.508 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:28:16.509 00:28:16.509 --- 10.0.0.2 ping statistics --- 00:28:16.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.509 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:16.509 00:28:16.509 --- 10.0.0.1 ping statistics --- 00:28:16.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.509 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1872539 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1872539 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1872539 ']' 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 [2024-05-15 16:49:23.372803] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
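With the two E810 ports split across network namespaces, one physical host can play both initiator and target, and the two pings above confirm reachability in each direction before the target starts. Condensed into a replay of the commands just traced (same cvl_0_0/cvl_0_1 names this rig reported; a sketch, not the harness code itself):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                         # root ns -> namespaced target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # namespaced target -> root ns

This is also why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" below: its TCP listener binds inside the namespace while the initiator side connects from the root namespace.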
00:28:16.509 [2024-05-15 16:49:23.372892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.509 [2024-05-15 16:49:23.447842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.509 [2024-05-15 16:49:23.528683] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.509 [2024-05-15 16:49:23.528739] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.509 [2024-05-15 16:49:23.528768] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.509 [2024-05-15 16:49:23.528785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.509 [2024-05-15 16:49:23.528795] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.509 [2024-05-15 16:49:23.528829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 [2024-05-15 16:49:23.661328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 null0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e2a5ad510e0144e1827aa9ef1db1e6e7 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.509 [2024-05-15 16:49:23.701359] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:16.509 [2024-05-15 16:49:23.701625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.509 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.767 nvme0n1 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.767 [ 00:28:16.767 { 00:28:16.767 "name": "nvme0n1", 00:28:16.767 "aliases": [ 00:28:16.767 "e2a5ad51-0e01-44e1-827a-a9ef1db1e6e7" 00:28:16.767 ], 00:28:16.767 "product_name": "NVMe disk", 00:28:16.767 "block_size": 512, 00:28:16.767 "num_blocks": 2097152, 00:28:16.767 "uuid": "e2a5ad51-0e01-44e1-827a-a9ef1db1e6e7", 00:28:16.767 "assigned_rate_limits": { 00:28:16.767 "rw_ios_per_sec": 0, 00:28:16.767 "rw_mbytes_per_sec": 0, 00:28:16.767 "r_mbytes_per_sec": 0, 00:28:16.767 "w_mbytes_per_sec": 0 00:28:16.767 }, 00:28:16.767 "claimed": false, 00:28:16.767 "zoned": false, 00:28:16.767 "supported_io_types": { 00:28:16.767 "read": true, 00:28:16.767 "write": true, 00:28:16.767 "unmap": false, 00:28:16.767 "write_zeroes": true, 00:28:16.767 "flush": true, 00:28:16.767 "reset": true, 00:28:16.767 "compare": true, 00:28:16.767 "compare_and_write": true, 00:28:16.767 "abort": true, 00:28:16.767 "nvme_admin": true, 00:28:16.767 "nvme_io": true 00:28:16.767 }, 00:28:16.767 "memory_domains": [ 00:28:16.767 { 00:28:16.767 "dma_device_id": "system", 00:28:16.767 "dma_device_type": 1 00:28:16.767 } 00:28:16.767 ], 00:28:16.767 "driver_specific": { 00:28:16.767 "nvme": [ 00:28:16.767 { 00:28:16.767 "trid": { 00:28:16.767 "trtype": "TCP", 00:28:16.767 "adrfam": "IPv4", 00:28:16.767 "traddr": "10.0.0.2", 00:28:16.767 "trsvcid": "4420", 00:28:16.767 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:16.767 }, 
00:28:16.767 "ctrlr_data": { 00:28:16.767 "cntlid": 1, 00:28:16.767 "vendor_id": "0x8086", 00:28:16.767 "model_number": "SPDK bdev Controller", 00:28:16.767 "serial_number": "00000000000000000000", 00:28:16.767 "firmware_revision": "24.05", 00:28:16.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.767 "oacs": { 00:28:16.767 "security": 0, 00:28:16.767 "format": 0, 00:28:16.767 "firmware": 0, 00:28:16.767 "ns_manage": 0 00:28:16.767 }, 00:28:16.767 "multi_ctrlr": true, 00:28:16.767 "ana_reporting": false 00:28:16.767 }, 00:28:16.767 "vs": { 00:28:16.767 "nvme_version": "1.3" 00:28:16.767 }, 00:28:16.767 "ns_data": { 00:28:16.767 "id": 1, 00:28:16.767 "can_share": true 00:28:16.767 } 00:28:16.767 } 00:28:16.767 ], 00:28:16.767 "mp_policy": "active_passive" 00:28:16.767 } 00:28:16.767 } 00:28:16.767 ] 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.767 16:49:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.767 [2024-05-15 16:49:23.954103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:16.767 [2024-05-15 16:49:23.954191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2697a80 (9): Bad file descriptor 00:28:17.025 [2024-05-15 16:49:24.096373] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 [ 00:28:17.025 { 00:28:17.025 "name": "nvme0n1", 00:28:17.025 "aliases": [ 00:28:17.025 "e2a5ad51-0e01-44e1-827a-a9ef1db1e6e7" 00:28:17.025 ], 00:28:17.025 "product_name": "NVMe disk", 00:28:17.025 "block_size": 512, 00:28:17.025 "num_blocks": 2097152, 00:28:17.025 "uuid": "e2a5ad51-0e01-44e1-827a-a9ef1db1e6e7", 00:28:17.025 "assigned_rate_limits": { 00:28:17.025 "rw_ios_per_sec": 0, 00:28:17.025 "rw_mbytes_per_sec": 0, 00:28:17.025 "r_mbytes_per_sec": 0, 00:28:17.025 "w_mbytes_per_sec": 0 00:28:17.025 }, 00:28:17.025 "claimed": false, 00:28:17.025 "zoned": false, 00:28:17.025 "supported_io_types": { 00:28:17.025 "read": true, 00:28:17.025 "write": true, 00:28:17.025 "unmap": false, 00:28:17.025 "write_zeroes": true, 00:28:17.025 "flush": true, 00:28:17.025 "reset": true, 00:28:17.025 "compare": true, 00:28:17.025 "compare_and_write": true, 00:28:17.025 "abort": true, 00:28:17.025 "nvme_admin": true, 00:28:17.025 "nvme_io": true 00:28:17.025 }, 00:28:17.025 "memory_domains": [ 00:28:17.025 { 00:28:17.025 "dma_device_id": "system", 00:28:17.025 "dma_device_type": 1 00:28:17.025 } 00:28:17.025 ], 00:28:17.025 "driver_specific": { 00:28:17.025 "nvme": [ 00:28:17.025 { 00:28:17.025 "trid": { 00:28:17.025 "trtype": "TCP", 00:28:17.025 "adrfam": "IPv4", 00:28:17.025 "traddr": "10.0.0.2", 00:28:17.025 "trsvcid": "4420", 00:28:17.025 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:17.025 }, 00:28:17.025 "ctrlr_data": { 00:28:17.025 "cntlid": 2, 00:28:17.025 
"vendor_id": "0x8086", 00:28:17.025 "model_number": "SPDK bdev Controller", 00:28:17.025 "serial_number": "00000000000000000000", 00:28:17.025 "firmware_revision": "24.05", 00:28:17.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.025 "oacs": { 00:28:17.025 "security": 0, 00:28:17.025 "format": 0, 00:28:17.025 "firmware": 0, 00:28:17.025 "ns_manage": 0 00:28:17.025 }, 00:28:17.025 "multi_ctrlr": true, 00:28:17.025 "ana_reporting": false 00:28:17.025 }, 00:28:17.025 "vs": { 00:28:17.025 "nvme_version": "1.3" 00:28:17.025 }, 00:28:17.025 "ns_data": { 00:28:17.025 "id": 1, 00:28:17.025 "can_share": true 00:28:17.025 } 00:28:17.025 } 00:28:17.025 ], 00:28:17.025 "mp_policy": "active_passive" 00:28:17.025 } 00:28:17.025 } 00:28:17.025 ] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.r3YlP88BaM 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.r3YlP88BaM 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 [2024-05-15 16:49:24.146759] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:17.025 [2024-05-15 16:49:24.146882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.r3YlP88BaM 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 [2024-05-15 16:49:24.154779] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.r3YlP88BaM 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 [2024-05-15 16:49:24.162794] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:17.025 [2024-05-15 16:49:24.162852] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:17.025 nvme0n1 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.025 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 [ 00:28:17.025 { 00:28:17.025 "name": "nvme0n1", 00:28:17.025 "aliases": [ 00:28:17.025 "e2a5ad51-0e01-44e1-827a-a9ef1db1e6e7" 00:28:17.025 ], 00:28:17.025 "product_name": "NVMe disk", 00:28:17.025 "block_size": 512, 00:28:17.025 "num_blocks": 2097152, 00:28:17.025 "uuid": "e2a5ad51-0e01-44e1-827a-a9ef1db1e6e7", 00:28:17.026 "assigned_rate_limits": { 00:28:17.026 "rw_ios_per_sec": 0, 00:28:17.026 "rw_mbytes_per_sec": 0, 00:28:17.026 "r_mbytes_per_sec": 0, 00:28:17.026 "w_mbytes_per_sec": 0 00:28:17.026 }, 00:28:17.026 "claimed": false, 00:28:17.026 "zoned": false, 00:28:17.026 "supported_io_types": { 00:28:17.026 "read": true, 00:28:17.026 "write": true, 00:28:17.026 "unmap": false, 00:28:17.026 "write_zeroes": true, 00:28:17.026 "flush": true, 00:28:17.026 "reset": true, 00:28:17.026 "compare": true, 00:28:17.026 "compare_and_write": true, 00:28:17.026 "abort": true, 00:28:17.026 "nvme_admin": true, 00:28:17.026 "nvme_io": true 00:28:17.026 }, 00:28:17.026 "memory_domains": [ 00:28:17.026 { 00:28:17.026 "dma_device_id": "system", 00:28:17.026 "dma_device_type": 1 00:28:17.026 } 00:28:17.026 ], 00:28:17.026 "driver_specific": { 00:28:17.026 "nvme": [ 00:28:17.026 { 00:28:17.026 "trid": { 00:28:17.026 "trtype": "TCP", 00:28:17.026 "adrfam": "IPv4", 00:28:17.026 "traddr": "10.0.0.2", 00:28:17.026 "trsvcid": "4421", 00:28:17.026 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:17.026 }, 00:28:17.026 "ctrlr_data": { 00:28:17.026 "cntlid": 3, 00:28:17.026 "vendor_id": "0x8086", 00:28:17.026 "model_number": "SPDK bdev Controller", 00:28:17.026 "serial_number": "00000000000000000000", 00:28:17.026 "firmware_revision": "24.05", 00:28:17.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.026 "oacs": { 00:28:17.026 "security": 0, 00:28:17.026 "format": 0, 00:28:17.026 "firmware": 0, 00:28:17.026 "ns_manage": 0 00:28:17.026 }, 00:28:17.026 "multi_ctrlr": true, 00:28:17.026 "ana_reporting": false 00:28:17.026 }, 00:28:17.026 "vs": { 00:28:17.026 "nvme_version": "1.3" 00:28:17.026 }, 00:28:17.026 "ns_data": { 00:28:17.026 "id": 1, 00:28:17.026 "can_share": true 00:28:17.026 } 00:28:17.026 } 00:28:17.026 ], 00:28:17.026 "mp_policy": "active_passive" 00:28:17.026 } 00:28:17.026 } 00:28:17.026 ] 00:28:17.026 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.026 16:49:24 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.026 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.026 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.r3YlP88BaM 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.284 rmmod nvme_tcp 00:28:17.284 rmmod nvme_fabrics 00:28:17.284 rmmod nvme_keyring 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1872539 ']' 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1872539 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1872539 ']' 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1872539 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1872539 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1872539' 00:28:17.284 killing process with pid 1872539 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1872539 00:28:17.284 [2024-05-15 16:49:24.354438] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:17.284 [2024-05-15 16:49:24.354475] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:17.284 [2024-05-15 16:49:24.354491] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:17.284 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1872539 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.543 16:49:24 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.543 16:49:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.443 16:49:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.443 00:28:19.443 real 0m5.869s 00:28:19.443 user 0m2.188s 00:28:19.443 sys 0m2.072s 00:28:19.443 16:49:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.443 16:49:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:19.443 ************************************ 00:28:19.443 END TEST nvmf_async_init 00:28:19.443 ************************************ 00:28:19.443 16:49:26 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:19.443 16:49:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.443 16:49:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.443 16:49:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.443 ************************************ 00:28:19.443 START TEST dma 00:28:19.443 ************************************ 00:28:19.443 16:49:26 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:19.701 * Looking for test storage... 
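The nvmf_async_init pass that just ended exercised two behaviors worth noting. First, controller identity across reconnects: cntlid climbed 1 -> 2 -> 3 over the reset and the TLS re-attach, so each (re)connection negotiated a fresh controller while the null-bdev namespace kept the same UUID. Second, the experimental NVMe/TCP TLS path, driven end to end over RPC. A sketch of that PSK sequence using the same RPCs the log shows, assuming plain rpc.py as the entry point (the harness goes through its rpc_cmd wrapper; the key below is the test's sample interchange key, not a secret):

  # Lock the subsystem down, open a TLS-only listener on 4421, then
  # authorize a single host by PSK and attach with the same key file.
  KEY=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk "$KEY"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

The three deprecation warnings logged at teardown (spdk_nvme_ctrlr_opts.psk, the PSK path, and [listen_]address.transport) flag exactly these knobs for removal in v24.09.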
00:28:19.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.701 16:49:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.701 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.701 16:49:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.701 16:49:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.701 16:49:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.702 16:49:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:19.702 16:49:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.702 16:49:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.702 16:49:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:19.702 16:49:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:19.702 00:28:19.702 real 0m0.071s 00:28:19.702 user 0m0.032s 00:28:19.702 sys 0m0.044s 00:28:19.702 16:49:26 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.702 16:49:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:19.702 ************************************ 00:28:19.702 END TEST dma 00:28:19.702 ************************************ 00:28:19.702 16:49:26 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:19.702 16:49:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.702 16:49:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.702 16:49:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.702 ************************************ 00:28:19.702 START TEST nvmf_identify 00:28:19.702 ************************************ 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:19.702 * Looking for test storage... 
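The dma suite passes almost instantly by design: DMA offload verification only applies to RDMA transports, so for this TCP run host/dma.sh reduces to the guard traced above. Paraphrased (the variable name is an assumption; the log only shows the comparison after expansion):

  # host/dma.sh, effectively, on a TCP run
  if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0    # nothing to exercise over TCP; recorded as PASS
  fi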
00:28:19.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.702 16:49:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:22.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:22.261 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:22.261 Found net devices under 0000:09:00.0: cvl_0_0 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:22.261 Found net devices under 0000:09:00.1: cvl_0_1 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.261 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:28:22.262 00:28:22.262 --- 10.0.0.2 ping statistics --- 00:28:22.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.262 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:28:22.262 00:28:22.262 --- 10.0.0.1 ping statistics --- 00:28:22.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.262 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1875023 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1875023 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1875023 ']' 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.262 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.262 [2024-05-15 16:49:29.416676] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:28:22.262 [2024-05-15 16:49:29.416759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.262 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.520 [2024-05-15 16:49:29.490681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.520 [2024-05-15 16:49:29.577720] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:22.520 [2024-05-15 16:49:29.577788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.520 [2024-05-15 16:49:29.577802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.520 [2024-05-15 16:49:29.577812] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.520 [2024-05-15 16:49:29.577836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.520 [2024-05-15 16:49:29.577925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.520 [2024-05-15 16:49:29.577991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.520 [2024-05-15 16:49:29.578018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.520 [2024-05-15 16:49:29.578019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.520 [2024-05-15 16:49:29.719854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.520 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.780 Malloc0 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:22.780 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.781 [2024-05-15 16:49:29.794949] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:22.781 [2024-05-15 16:49:29.795271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.781 [ 00:28:22.781 { 00:28:22.781 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:22.781 "subtype": "Discovery", 00:28:22.781 "listen_addresses": [ 00:28:22.781 { 00:28:22.781 "trtype": "TCP", 00:28:22.781 "adrfam": "IPv4", 00:28:22.781 "traddr": "10.0.0.2", 00:28:22.781 "trsvcid": "4420" 00:28:22.781 } 00:28:22.781 ], 00:28:22.781 "allow_any_host": true, 00:28:22.781 "hosts": [] 00:28:22.781 }, 00:28:22.781 { 00:28:22.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.781 "subtype": "NVMe", 00:28:22.781 "listen_addresses": [ 00:28:22.781 { 00:28:22.781 "trtype": "TCP", 00:28:22.781 "adrfam": "IPv4", 00:28:22.781 "traddr": "10.0.0.2", 00:28:22.781 "trsvcid": "4420" 00:28:22.781 } 00:28:22.781 ], 00:28:22.781 "allow_any_host": true, 00:28:22.781 "hosts": [], 00:28:22.781 "serial_number": "SPDK00000000000001", 00:28:22.781 "model_number": "SPDK bdev Controller", 00:28:22.781 "max_namespaces": 32, 00:28:22.781 "min_cntlid": 1, 00:28:22.781 "max_cntlid": 65519, 00:28:22.781 "namespaces": [ 00:28:22.781 { 00:28:22.781 "nsid": 1, 00:28:22.781 "bdev_name": "Malloc0", 00:28:22.781 "name": "Malloc0", 00:28:22.781 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:22.781 "eui64": "ABCDEF0123456789", 00:28:22.781 "uuid": "87cf5dab-4e7b-4656-abe5-7fbc92de7bdd" 00:28:22.781 } 00:28:22.781 ] 00:28:22.781 } 00:28:22.781 ] 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.781 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:22.781 [2024-05-15 16:49:29.834512] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
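The rpc_cmd calls above build the whole target configuration: a TCP transport, a RAM-backed Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with one namespace, plus data and discovery listeners on 10.0.0.2:4420. rpc_cmd is essentially a retry wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (a UNIX socket, so it works even though the target's network lives in the namespace). A hedged by-hand equivalent, assuming an SPDK checkout and a running nvmf_tgt, with the flags copied verbatim from the trace:

# Same target state as host/identify.sh builds above, driven manually.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                         # prints the JSON shown above

Note the deprecation warning in the log: passing the transport via [listen_]address.transport is deprecated in favor of trtype and is slated for removal in v24.09.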
00:28:22.781 [2024-05-15 16:49:29.834566] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875050 ] 00:28:22.781 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.781 [2024-05-15 16:49:29.870860] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:22.781 [2024-05-15 16:49:29.870920] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:22.781 [2024-05-15 16:49:29.870930] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:22.781 [2024-05-15 16:49:29.870945] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:22.781 [2024-05-15 16:49:29.870959] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:22.781 [2024-05-15 16:49:29.871245] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:22.781 [2024-05-15 16:49:29.871308] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a6b120 0 00:28:22.781 [2024-05-15 16:49:29.877245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:22.781 [2024-05-15 16:49:29.877267] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:22.781 [2024-05-15 16:49:29.877276] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:22.781 [2024-05-15 16:49:29.877282] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:22.781 [2024-05-15 16:49:29.877335] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.877351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.877359] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.877376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:22.781 [2024-05-15 16:49:29.877403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.781 [2024-05-15 16:49:29.885244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.781 [2024-05-15 16:49:29.885263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.781 [2024-05-15 16:49:29.885271] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885278] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.781 [2024-05-15 16:49:29.885295] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:22.781 [2024-05-15 16:49:29.885305] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:22.781 [2024-05-15 16:49:29.885314] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:22.781 [2024-05-15 16:49:29.885336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885351] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.885363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.781 [2024-05-15 16:49:29.885387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.781 [2024-05-15 16:49:29.885531] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.781 [2024-05-15 16:49:29.885547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.781 [2024-05-15 16:49:29.885554] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.781 [2024-05-15 16:49:29.885572] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:22.781 [2024-05-15 16:49:29.885585] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:22.781 [2024-05-15 16:49:29.885597] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.885622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.781 [2024-05-15 16:49:29.885643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.781 [2024-05-15 16:49:29.885805] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.781 [2024-05-15 16:49:29.885818] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.781 [2024-05-15 16:49:29.885825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885832] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.781 [2024-05-15 16:49:29.885841] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:22.781 [2024-05-15 16:49:29.885855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:22.781 [2024-05-15 16:49:29.885867] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885879] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.885887] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.885897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.781 [2024-05-15 16:49:29.885918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.781 [2024-05-15 16:49:29.886079] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.781 [2024-05-15 
16:49:29.886094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.781 [2024-05-15 16:49:29.886101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886108] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.781 [2024-05-15 16:49:29.886118] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:22.781 [2024-05-15 16:49:29.886135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886151] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.886161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.781 [2024-05-15 16:49:29.886182] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.781 [2024-05-15 16:49:29.886340] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.781 [2024-05-15 16:49:29.886355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.781 [2024-05-15 16:49:29.886362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.781 [2024-05-15 16:49:29.886379] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:22.781 [2024-05-15 16:49:29.886387] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:22.781 [2024-05-15 16:49:29.886401] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:22.781 [2024-05-15 16:49:29.886510] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:22.781 [2024-05-15 16:49:29.886519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:22.781 [2024-05-15 16:49:29.886533] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886540] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886547] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.886558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.781 [2024-05-15 16:49:29.886579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.781 [2024-05-15 16:49:29.886697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.781 [2024-05-15 16:49:29.886712] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.781 [2024-05-15 16:49:29.886719] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.781 [2024-05-15 16:49:29.886735] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:22.781 [2024-05-15 16:49:29.886757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886766] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.781 [2024-05-15 16:49:29.886773] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.781 [2024-05-15 16:49:29.886784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.782 [2024-05-15 16:49:29.886804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.782 [2024-05-15 16:49:29.886961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.782 [2024-05-15 16:49:29.886973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.782 [2024-05-15 16:49:29.886980] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.886987] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.782 [2024-05-15 16:49:29.886997] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:22.782 [2024-05-15 16:49:29.887005] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:22.782 [2024-05-15 16:49:29.887018] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:22.782 [2024-05-15 16:49:29.887033] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:22.782 [2024-05-15 16:49:29.887048] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887056] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.782 [2024-05-15 16:49:29.887088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.782 [2024-05-15 16:49:29.887267] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.782 [2024-05-15 16:49:29.887283] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.782 [2024-05-15 16:49:29.887290] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887297] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6b120): datao=0, datal=4096, cccid=0 00:28:22.782 [2024-05-15 16:49:29.887305] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac41f0) on tqpair(0x1a6b120): expected_datao=0, payload_size=4096 00:28:22.782 [2024-05-15 16:49:29.887313] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887334] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887343] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.782 [2024-05-15 16:49:29.887377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.782 [2024-05-15 16:49:29.887384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.782 [2024-05-15 16:49:29.887404] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:22.782 [2024-05-15 16:49:29.887413] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:22.782 [2024-05-15 16:49:29.887421] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:22.782 [2024-05-15 16:49:29.887434] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:22.782 [2024-05-15 16:49:29.887447] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:22.782 [2024-05-15 16:49:29.887456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:22.782 [2024-05-15 16:49:29.887470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:22.782 [2024-05-15 16:49:29.887483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887491] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:22.782 [2024-05-15 16:49:29.887529] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.782 [2024-05-15 16:49:29.887689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.782 [2024-05-15 16:49:29.887704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.782 [2024-05-15 16:49:29.887712] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887719] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac41f0) on tqpair=0x1a6b120 00:28:22.782 [2024-05-15 16:49:29.887732] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:22.782 [2024-05-15 16:49:29.887766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887773] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887780] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.782 [2024-05-15 16:49:29.887798] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887805] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887812] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.782 [2024-05-15 16:49:29.887830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887837] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.782 [2024-05-15 16:49:29.887862] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:22.782 [2024-05-15 16:49:29.887881] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:22.782 [2024-05-15 16:49:29.887894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.887901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.887911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.782 [2024-05-15 16:49:29.887937] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac41f0, cid 0, qid 0 00:28:22.782 [2024-05-15 16:49:29.887948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4350, cid 1, qid 0 00:28:22.782 [2024-05-15 16:49:29.887956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac44b0, cid 2, qid 0 00:28:22.782 [2024-05-15 16:49:29.887964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.782 [2024-05-15 16:49:29.887972] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4770, cid 4, qid 0 00:28:22.782 [2024-05-15 16:49:29.888127] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.782 [2024-05-15 16:49:29.888140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.782 [2024-05-15 16:49:29.888147] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888154] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4770) on tqpair=0x1a6b120 
00:28:22.782 [2024-05-15 16:49:29.888164] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:22.782 [2024-05-15 16:49:29.888173] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:22.782 [2024-05-15 16:49:29.888190] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888199] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.888210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.782 [2024-05-15 16:49:29.888239] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4770, cid 4, qid 0 00:28:22.782 [2024-05-15 16:49:29.888367] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.782 [2024-05-15 16:49:29.888382] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.782 [2024-05-15 16:49:29.888389] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888396] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6b120): datao=0, datal=4096, cccid=4 00:28:22.782 [2024-05-15 16:49:29.888404] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4770) on tqpair(0x1a6b120): expected_datao=0, payload_size=4096 00:28:22.782 [2024-05-15 16:49:29.888412] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888452] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888461] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888563] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.782 [2024-05-15 16:49:29.888574] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.782 [2024-05-15 16:49:29.888581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4770) on tqpair=0x1a6b120 00:28:22.782 [2024-05-15 16:49:29.888607] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:22.782 [2024-05-15 16:49:29.888643] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888654] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.888664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.782 [2024-05-15 16:49:29.888675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888683] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.782 [2024-05-15 16:49:29.888689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a6b120) 00:28:22.782 [2024-05-15 16:49:29.888702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.782 [2024-05-15 16:49:29.888731] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4770, cid 4, qid 0 00:28:22.782 [2024-05-15 16:49:29.888743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac48d0, cid 5, qid 0 00:28:22.782 [2024-05-15 16:49:29.888909] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.782 [2024-05-15 16:49:29.888924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.782 [2024-05-15 16:49:29.888932] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.888938] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6b120): datao=0, datal=1024, cccid=4 00:28:22.783 [2024-05-15 16:49:29.888946] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4770) on tqpair(0x1a6b120): expected_datao=0, payload_size=1024 00:28:22.783 [2024-05-15 16:49:29.888954] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.888964] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.888971] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.888980] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.783 [2024-05-15 16:49:29.888989] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.783 [2024-05-15 16:49:29.888996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.889003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac48d0) on tqpair=0x1a6b120 00:28:22.783 [2024-05-15 16:49:29.933230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.783 [2024-05-15 16:49:29.933248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.783 [2024-05-15 16:49:29.933256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4770) on tqpair=0x1a6b120 00:28:22.783 [2024-05-15 16:49:29.933286] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6b120) 00:28:22.783 [2024-05-15 16:49:29.933308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.783 [2024-05-15 16:49:29.933338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4770, cid 4, qid 0 00:28:22.783 [2024-05-15 16:49:29.933478] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.783 [2024-05-15 16:49:29.933494] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.783 [2024-05-15 16:49:29.933501] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933508] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6b120): datao=0, datal=3072, cccid=4 00:28:22.783 [2024-05-15 16:49:29.933516] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4770) on tqpair(0x1a6b120): expected_datao=0, payload_size=3072 00:28:22.783 [2024-05-15 16:49:29.933524] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933534] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
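The three GET LOG PAGE commands around this point are the standard two-pass discovery-log read. Per the NVMe spec, cdw10's low byte is the log page ID (0x70, the discovery log) and bits 31:16 are NUMDL, the zero-based dword count. So 0x00ff0070 fetches the 1 KiB header (generation counter, record count, record format), 0x02ff0070 fetches 3 KiB of records (the datal=3072 transfer just above), and the 0x00010070 command just below re-reads only the 8-byte generation counter to confirm the log did not change mid-read. The arithmetic, as a runnable check:

# Decode the GET LOG PAGE cdw10 values seen in this trace:
# low byte = log page ID (0x70 = discovery), bits 31:16 = NUMDL (0-based dwords).
printf '%d bytes\n' $(( ((0x00ff0070 >> 16) + 1) * 4 ))   # 1024: header read
printf '%d bytes\n' $(( ((0x02ff0070 >> 16) + 1) * 4 ))   # 3072: record read
printf '%d bytes\n' $(( ((0x00010070 >> 16) + 1) * 4 ))   # 8: genctr re-check

The decoded result matches the Discovery Log Page summary printed further down: Generation Counter 2, Number of Records 2, Record Format 0.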
00:28:22.783 [2024-05-15 16:49:29.933542] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.783 [2024-05-15 16:49:29.933565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.783 [2024-05-15 16:49:29.933572] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933579] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4770) on tqpair=0x1a6b120 00:28:22.783 [2024-05-15 16:49:29.933595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6b120) 00:28:22.783 [2024-05-15 16:49:29.933619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.783 [2024-05-15 16:49:29.933647] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4770, cid 4, qid 0 00:28:22.783 [2024-05-15 16:49:29.933775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.783 [2024-05-15 16:49:29.933787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.783 [2024-05-15 16:49:29.933794] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933801] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6b120): datao=0, datal=8, cccid=4 00:28:22.783 [2024-05-15 16:49:29.933809] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4770) on tqpair(0x1a6b120): expected_datao=0, payload_size=8 00:28:22.783 [2024-05-15 16:49:29.933817] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933827] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.933834] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.974315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.783 [2024-05-15 16:49:29.974333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.783 [2024-05-15 16:49:29.974341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.783 [2024-05-15 16:49:29.974348] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4770) on tqpair=0x1a6b120 00:28:22.783 ===================================================== 00:28:22.783 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:22.783 ===================================================== 00:28:22.783 Controller Capabilities/Features 00:28:22.783 ================================ 00:28:22.783 Vendor ID: 0000 00:28:22.783 Subsystem Vendor ID: 0000 00:28:22.783 Serial Number: .................... 00:28:22.783 Model Number: ........................................ 
00:28:22.783 Firmware Version: 24.05
00:28:22.783 Recommended Arb Burst: 0
00:28:22.783 IEEE OUI Identifier: 00 00 00
00:28:22.783 Multi-path I/O
00:28:22.783 May have multiple subsystem ports: No
00:28:22.783 May have multiple controllers: No
00:28:22.783 Associated with SR-IOV VF: No
00:28:22.783 Max Data Transfer Size: 131072
00:28:22.783 Max Number of Namespaces: 0
00:28:22.783 Max Number of I/O Queues: 1024
00:28:22.783 NVMe Specification Version (VS): 1.3
00:28:22.783 NVMe Specification Version (Identify): 1.3
00:28:22.783 Maximum Queue Entries: 128
00:28:22.783 Contiguous Queues Required: Yes
00:28:22.783 Arbitration Mechanisms Supported
00:28:22.783 Weighted Round Robin: Not Supported
00:28:22.783 Vendor Specific: Not Supported
00:28:22.783 Reset Timeout: 15000 ms
00:28:22.783 Doorbell Stride: 4 bytes
00:28:22.783 NVM Subsystem Reset: Not Supported
00:28:22.783 Command Sets Supported
00:28:22.783 NVM Command Set: Supported
00:28:22.783 Boot Partition: Not Supported
00:28:22.783 Memory Page Size Minimum: 4096 bytes
00:28:22.783 Memory Page Size Maximum: 4096 bytes
00:28:22.783 Persistent Memory Region: Not Supported
00:28:22.783 Optional Asynchronous Events Supported
00:28:22.783 Namespace Attribute Notices: Not Supported
00:28:22.783 Firmware Activation Notices: Not Supported
00:28:22.783 ANA Change Notices: Not Supported
00:28:22.783 PLE Aggregate Log Change Notices: Not Supported
00:28:22.783 LBA Status Info Alert Notices: Not Supported
00:28:22.783 EGE Aggregate Log Change Notices: Not Supported
00:28:22.783 Normal NVM Subsystem Shutdown event: Not Supported
00:28:22.783 Zone Descriptor Change Notices: Not Supported
00:28:22.783 Discovery Log Change Notices: Supported
00:28:22.783 Controller Attributes
00:28:22.783 128-bit Host Identifier: Not Supported
00:28:22.783 Non-Operational Permissive Mode: Not Supported
00:28:22.783 NVM Sets: Not Supported
00:28:22.783 Read Recovery Levels: Not Supported
00:28:22.783 Endurance Groups: Not Supported
00:28:22.783 Predictable Latency Mode: Not Supported
00:28:22.783 Traffic Based Keep Alive: Not Supported
00:28:22.783 Namespace Granularity: Not Supported
00:28:22.783 SQ Associations: Not Supported
00:28:22.783 UUID List: Not Supported
00:28:22.783 Multi-Domain Subsystem: Not Supported
00:28:22.783 Fixed Capacity Management: Not Supported
00:28:22.783 Variable Capacity Management: Not Supported
00:28:22.783 Delete Endurance Group: Not Supported
00:28:22.783 Delete NVM Set: Not Supported
00:28:22.783 Extended LBA Formats Supported: Not Supported
00:28:22.783 Flexible Data Placement Supported: Not Supported
00:28:22.783
00:28:22.783 Controller Memory Buffer Support
00:28:22.783 ================================
00:28:22.783 Supported: No
00:28:22.783
00:28:22.783 Persistent Memory Region Support
00:28:22.783 ================================
00:28:22.783 Supported: No
00:28:22.783
00:28:22.783 Admin Command Set Attributes
00:28:22.783 ============================
00:28:22.783 Security Send/Receive: Not Supported
00:28:22.783 Format NVM: Not Supported
00:28:22.783 Firmware Activate/Download: Not Supported
00:28:22.783 Namespace Management: Not Supported
00:28:22.783 Device Self-Test: Not Supported
00:28:22.783 Directives: Not Supported
00:28:22.783 NVMe-MI: Not Supported
00:28:22.783 Virtualization Management: Not Supported
00:28:22.783 Doorbell Buffer Config: Not Supported
00:28:22.783 Get LBA Status Capability: Not Supported
00:28:22.783 Command & Feature Lockdown Capability: Not Supported
00:28:22.783 Abort Command Limit: 1
00:28:22.783 Async Event Request Limit: 4
00:28:22.783 Number of Firmware Slots: N/A
00:28:22.783 Firmware Slot 1 Read-Only: N/A
00:28:22.783 Firmware Activation Without Reset: N/A
00:28:22.783 Multiple Update Detection Support: N/A
00:28:22.783 Firmware Update Granularity: No Information Provided
00:28:22.783 Per-Namespace SMART Log: No
00:28:22.783 Asymmetric Namespace Access Log Page: Not Supported
00:28:22.784 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:22.784 Command Effects Log Page: Not Supported
00:28:22.784 Get Log Page Extended Data: Supported
00:28:22.784 Telemetry Log Pages: Not Supported
00:28:22.784 Persistent Event Log Pages: Not Supported
00:28:22.784 Supported Log Pages Log Page: May Support
00:28:22.784 Commands Supported & Effects Log Page: Not Supported
00:28:22.784 Feature Identifiers & Effects Log Page: May Support
00:28:22.784 NVMe-MI Commands & Effects Log Page: May Support
00:28:22.784 Data Area 4 for Telemetry Log: Not Supported
00:28:22.784 Error Log Page Entries Supported: 128
00:28:22.784 Keep Alive: Not Supported
00:28:22.784
00:28:22.784 NVM Command Set Attributes
00:28:22.784 ==========================
00:28:22.784 Submission Queue Entry Size
00:28:22.784 Max: 1
00:28:22.784 Min: 1
00:28:22.784 Completion Queue Entry Size
00:28:22.784 Max: 1
00:28:22.784 Min: 1
00:28:22.784 Number of Namespaces: 0
00:28:22.784 Compare Command: Not Supported
00:28:22.784 Write Uncorrectable Command: Not Supported
00:28:22.784 Dataset Management Command: Not Supported
00:28:22.784 Write Zeroes Command: Not Supported
00:28:22.784 Set Features Save Field: Not Supported
00:28:22.784 Reservations: Not Supported
00:28:22.784 Timestamp: Not Supported
00:28:22.784 Copy: Not Supported
00:28:22.784 Volatile Write Cache: Not Present
00:28:22.784 Atomic Write Unit (Normal): 1
00:28:22.784 Atomic Write Unit (PFail): 1
00:28:22.784 Atomic Compare & Write Unit: 1
00:28:22.784 Fused Compare & Write: Supported
00:28:22.784 Scatter-Gather List
00:28:22.784 SGL Command Set: Supported
00:28:22.784 SGL Keyed: Supported
00:28:22.784 SGL Bit Bucket Descriptor: Not Supported
00:28:22.784 SGL Metadata Pointer: Not Supported
00:28:22.784 Oversized SGL: Not Supported
00:28:22.784 SGL Metadata Address: Not Supported
00:28:22.784 SGL Offset: Supported
00:28:22.784 Transport SGL Data Block: Not Supported
00:28:22.784 Replay Protected Memory Block: Not Supported
00:28:22.784
00:28:22.784 Firmware Slot Information
00:28:22.784 =========================
00:28:22.784 Active slot: 0
00:28:22.784
00:28:22.784
00:28:22.784 Error Log
00:28:22.784 =========
00:28:22.784
00:28:22.784 Active Namespaces
00:28:22.784 =================
00:28:22.784 Discovery Log Page
00:28:22.784 ==================
00:28:22.784 Generation Counter: 2
00:28:22.784 Number of Records: 2
00:28:22.784 Record Format: 0
00:28:22.784
00:28:22.784 Discovery Log Entry 0
00:28:22.784 ----------------------
00:28:22.784 Transport Type: 3 (TCP)
00:28:22.784 Address Family: 1 (IPv4)
00:28:22.784 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:22.784 Entry Flags:
00:28:22.784 Duplicate Returned Information: 1
00:28:22.784 Explicit Persistent Connection Support for Discovery: 1
00:28:22.784 Transport Requirements:
00:28:22.784 Secure Channel: Not Required
00:28:22.784 Port ID: 0 (0x0000)
00:28:22.784 Controller ID: 65535 (0xffff)
00:28:22.784 Admin Max SQ Size: 128
00:28:22.784 Transport Service Identifier: 4420
00:28:22.784 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:22.784 Transport Address: 10.0.0.2
00:28:22.784
Discovery Log Entry 1 00:28:22.784 ---------------------- 00:28:22.784 Transport Type: 3 (TCP) 00:28:22.784 Address Family: 1 (IPv4) 00:28:22.784 Subsystem Type: 2 (NVM Subsystem) 00:28:22.784 Entry Flags: 00:28:22.784 Duplicate Returned Information: 0 00:28:22.784 Explicit Persistent Connection Support for Discovery: 0 00:28:22.784 Transport Requirements: 00:28:22.784 Secure Channel: Not Required 00:28:22.784 Port ID: 0 (0x0000) 00:28:22.784 Controller ID: 65535 (0xffff) 00:28:22.784 Admin Max SQ Size: 128 00:28:22.784 Transport Service Identifier: 4420 00:28:22.784 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:22.784 Transport Address: 10.0.0.2 [2024-05-15 16:49:29.974466] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:22.784 [2024-05-15 16:49:29.974490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.784 [2024-05-15 16:49:29.974504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.784 [2024-05-15 16:49:29.974514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.784 [2024-05-15 16:49:29.974524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.784 [2024-05-15 16:49:29.974538] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974552] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.784 [2024-05-15 16:49:29.974563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.784 [2024-05-15 16:49:29.974587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.784 [2024-05-15 16:49:29.974699] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.784 [2024-05-15 16:49:29.974711] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.784 [2024-05-15 16:49:29.974718] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974725] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4610) on tqpair=0x1a6b120 00:28:22.784 [2024-05-15 16:49:29.974743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974752] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974758] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.784 [2024-05-15 16:49:29.974769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.784 [2024-05-15 16:49:29.974795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.784 [2024-05-15 16:49:29.974922] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.784 [2024-05-15 16:49:29.974938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.784 [2024-05-15 16:49:29.974946] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974953] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4610) on tqpair=0x1a6b120 00:28:22.784 [2024-05-15 16:49:29.974962] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:22.784 [2024-05-15 16:49:29.974970] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:22.784 [2024-05-15 16:49:29.974987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.974996] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.975002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.784 [2024-05-15 16:49:29.975013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.784 [2024-05-15 16:49:29.975033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.784 [2024-05-15 16:49:29.975159] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.784 [2024-05-15 16:49:29.975174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.784 [2024-05-15 16:49:29.975181] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.975188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4610) on tqpair=0x1a6b120 00:28:22.784 [2024-05-15 16:49:29.975206] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.975243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.784 [2024-05-15 16:49:29.975253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.784 [2024-05-15 16:49:29.975265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.784 [2024-05-15 16:49:29.975287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.784 [2024-05-15 16:49:29.975413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.784 [2024-05-15 16:49:29.975429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.784 [2024-05-15 16:49:29.975436] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.785 [2024-05-15 16:49:29.975443] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4610) on tqpair=0x1a6b120 00:28:22.785 [2024-05-15 16:49:29.975461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.785 [2024-05-15 16:49:29.975470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.785 [2024-05-15 16:49:29.975477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.785 [2024-05-15 16:49:29.975487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.785 [2024-05-15 16:49:29.975508] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.785 [2024-05-15 16:49:29.975628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.785 [2024-05-15 
16:49:29.975643] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.785 [2024-05-15 16:49:29.975650] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.785 [2024-05-15 16:49:29.975657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4610) on tqpair=0x1a6b120 00:28:22.785 [2024-05-15 16:49:29.975675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.785 [2024-05-15 16:49:29.975684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.785 [2024-05-15 16:49:29.975691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6b120) 00:28:22.785 [2024-05-15 16:49:29.975701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.785 [2024-05-15 16:49:29.975726] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4610, cid 3, qid 0 00:28:22.786 [2024-05-15 16:49:29.981411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.786 [2024-05-15 16:49:29.981426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.786 [2024-05-15 16:49:29.981433] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.786 [2024-05-15 16:49:29.981440] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ac4610) on tqpair=0x1a6b120 00:28:22.786 [2024-05-15 16:49:29.981455] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:22.786 00:28:22.786 16:49:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:23.048 [2024-05-15 16:49:30.015742] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
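
Everything from the "Starting SPDK" banner down through the controller report below is produced by host/identify.sh@45 invoking SPDK's spdk_nvme_identify example tool against the TCP target. A minimal sketch of that same invocation, using the checkout path and target address from this run (adjust both for another environment):

    # Sketch: run the identify example against an NVMe-oF TCP target,
    # mirroring the host/identify.sh@45 step above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this run's checkout path
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all   # enable every debug log flag; this is what emits the *DEBUG* records seen here
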
00:28:23.048 [2024-05-15 16:49:30.015793] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875052 ] 00:28:23.048 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.048 [2024-05-15 16:49:30.052709] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:23.048 [2024-05-15 16:49:30.052770] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:23.048 [2024-05-15 16:49:30.052780] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:23.048 [2024-05-15 16:49:30.052796] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:23.048 [2024-05-15 16:49:30.052811] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:23.048 [2024-05-15 16:49:30.053055] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:23.048 [2024-05-15 16:49:30.053103] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc57120 0 00:28:23.048 [2024-05-15 16:49:30.059236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:23.048 [2024-05-15 16:49:30.059256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:23.048 [2024-05-15 16:49:30.059264] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:23.048 [2024-05-15 16:49:30.059271] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:23.048 [2024-05-15 16:49:30.059327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.059339] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.059347] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.059363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:23.048 [2024-05-15 16:49:30.059389] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.067229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.067247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 [2024-05-15 16:49:30.067255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.048 [2024-05-15 16:49:30.067278] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:23.048 [2024-05-15 16:49:30.067289] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:23.048 [2024-05-15 16:49:30.067298] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:23.048 [2024-05-15 16:49:30.067319] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 
16:49:30.067334] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.067346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.048 [2024-05-15 16:49:30.067369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.067495] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.067508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 [2024-05-15 16:49:30.067515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.048 [2024-05-15 16:49:30.067538] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:23.048 [2024-05-15 16:49:30.067552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:23.048 [2024-05-15 16:49:30.067565] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067572] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067579] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.067589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.048 [2024-05-15 16:49:30.067611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.067738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.067753] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 [2024-05-15 16:49:30.067760] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067767] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.048 [2024-05-15 16:49:30.067776] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:23.048 [2024-05-15 16:49:30.067790] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:23.048 [2024-05-15 16:49:30.067803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067811] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067817] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.067828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.048 [2024-05-15 16:49:30.067849] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.067951] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.067963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 
[2024-05-15 16:49:30.067970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.067977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.048 [2024-05-15 16:49:30.067986] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:23.048 [2024-05-15 16:49:30.068003] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.068029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.048 [2024-05-15 16:49:30.068049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.068174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.068187] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 [2024-05-15 16:49:30.068195] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068202] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.048 [2024-05-15 16:49:30.068210] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:23.048 [2024-05-15 16:49:30.068230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:23.048 [2024-05-15 16:49:30.068249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:23.048 [2024-05-15 16:49:30.068360] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:23.048 [2024-05-15 16:49:30.068368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:23.048 [2024-05-15 16:49:30.068381] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.068406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.048 [2024-05-15 16:49:30.068427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.068553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.068568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 [2024-05-15 16:49:30.068575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068582] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.048 
[2024-05-15 16:49:30.068591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:23.048 [2024-05-15 16:49:30.068608] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.048 [2024-05-15 16:49:30.068623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.048 [2024-05-15 16:49:30.068634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.048 [2024-05-15 16:49:30.068655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.048 [2024-05-15 16:49:30.068781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.048 [2024-05-15 16:49:30.068796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.048 [2024-05-15 16:49:30.068803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.068810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.049 [2024-05-15 16:49:30.068818] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:23.049 [2024-05-15 16:49:30.068827] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.068841] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:23.049 [2024-05-15 16:49:30.068855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.068869] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.068877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.068888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.049 [2024-05-15 16:49:30.068909] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.049 [2024-05-15 16:49:30.069075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.049 [2024-05-15 16:49:30.069091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.049 [2024-05-15 16:49:30.069102] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.069110] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=4096, cccid=0 00:28:23.049 [2024-05-15 16:49:30.069118] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb01f0) on tqpair(0xc57120): expected_datao=0, payload_size=4096 00:28:23.049 [2024-05-15 16:49:30.069126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.069144] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.069154] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
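
The records above trace the controller-initialization state machine over the admin queue: each FABRIC PROPERTY GET/SET capsule reads or writes a controller register (VS, CAP, CC, CSTS), the host sets CC.EN = 1, polls until CSTS.RDY = 1, then issues IDENTIFY (opcode 06h, cdw10:00000001, i.e. CNS 01h = identify controller). The same handshake can be driven from the Linux kernel initiator instead of SPDK's userspace driver; a sketch with nvme-cli, where the /dev/nvme0 device name is an assumption:

    # Sketch: reach the same target with the kernel nvme_tcp initiator;
    # the controller-side register/identify handshake is identical.
    modprobe nvme_tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0    # IDENTIFY controller (CNS 01h); assumed device name
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
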
00:28:23.049 [2024-05-15 16:49:30.109326] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.049 [2024-05-15 16:49:30.109345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.049 [2024-05-15 16:49:30.109352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109360] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.049 [2024-05-15 16:49:30.109372] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:23.049 [2024-05-15 16:49:30.109381] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:23.049 [2024-05-15 16:49:30.109389] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:23.049 [2024-05-15 16:49:30.109401] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:23.049 [2024-05-15 16:49:30.109411] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:23.049 [2024-05-15 16:49:30.109419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.109434] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.109447] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109455] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.109474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:23.049 [2024-05-15 16:49:30.109496] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.049 [2024-05-15 16:49:30.109602] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.049 [2024-05-15 16:49:30.109614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.049 [2024-05-15 16:49:30.109622] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb01f0) on tqpair=0xc57120 00:28:23.049 [2024-05-15 16:49:30.109640] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109655] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.109665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.049 [2024-05-15 16:49:30.109675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.109698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.049 [2024-05-15 16:49:30.109712] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109720] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.109736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.049 [2024-05-15 16:49:30.109745] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109752] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109759] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.109768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.049 [2024-05-15 16:49:30.109777] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.109796] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.109809] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.109816] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.109827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.049 [2024-05-15 16:49:30.109849] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb01f0, cid 0, qid 0 00:28:23.049 [2024-05-15 16:49:30.109860] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0350, cid 1, qid 0 00:28:23.049 [2024-05-15 16:49:30.109868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb04b0, cid 2, qid 0 00:28:23.049 [2024-05-15 16:49:30.109876] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0610, cid 3, qid 0 00:28:23.049 [2024-05-15 16:49:30.109884] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.049 [2024-05-15 16:49:30.110020] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.049 [2024-05-15 16:49:30.110035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.049 [2024-05-15 16:49:30.110043] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.110050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.049 [2024-05-15 16:49:30.110059] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:23.049 [2024-05-15 16:49:30.110068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:23.049 
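
After identify, the driver arms four ASYNC EVENT REQUEST commands (opcode 0Ch, cid 0-3), selects which events to report via SET FEATURES ASYNC EVENT CONFIGURATION (cdw10:0000000b, FID 0Bh), and reads the keep-alive timer via GET FEATURES KEEP ALIVE TIMER (cdw10:0000000f, FID 0Fh), after which it decides to send a keep-alive every 5000000 us. The same two features can be queried from a kernel-attached controller; a sketch, again assuming /dev/nvme0:

    # Sketch: read the two features negotiated above (assumed device name).
    nvme get-feature /dev/nvme0 -f 0x0b -H   # Asynchronous Event Configuration (FID 0Bh)
    nvme get-feature /dev/nvme0 -f 0x0f -H   # Keep Alive Timer / KATO (FID 0Fh)
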
[2024-05-15 16:49:30.110082] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.110093] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.110104] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.110112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.110118] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.110129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:23.049 [2024-05-15 16:49:30.110150] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.049 [2024-05-15 16:49:30.114225] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.049 [2024-05-15 16:49:30.114243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.049 [2024-05-15 16:49:30.114254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.114262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.049 [2024-05-15 16:49:30.114320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.114340] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.114355] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.114363] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.049 [2024-05-15 16:49:30.114374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.049 [2024-05-15 16:49:30.114396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.049 [2024-05-15 16:49:30.114535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.049 [2024-05-15 16:49:30.114548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.049 [2024-05-15 16:49:30.114555] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.114562] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=4096, cccid=4 00:28:23.049 [2024-05-15 16:49:30.114570] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb0770) on tqpair(0xc57120): expected_datao=0, payload_size=4096 00:28:23.049 [2024-05-15 16:49:30.114578] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.114594] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.114603] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.156228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.049 [2024-05-15 16:49:30.156248] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.049 [2024-05-15 16:49:30.156256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.049 [2024-05-15 16:49:30.156263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.049 [2024-05-15 16:49:30.156279] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:23.049 [2024-05-15 16:49:30.156297] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.156316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:23.049 [2024-05-15 16:49:30.156331] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.156339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.156350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.156373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.050 [2024-05-15 16:49:30.156549] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.050 [2024-05-15 16:49:30.156564] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.050 [2024-05-15 16:49:30.156572] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.156579] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=4096, cccid=4 00:28:23.050 [2024-05-15 16:49:30.156587] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb0770) on tqpair(0xc57120): expected_datao=0, payload_size=4096 00:28:23.050 [2024-05-15 16:49:30.156595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.156605] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.156620] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.197352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.197372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.197380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.197387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.197410] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.197429] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.197444] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.197452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.197463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.197486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.050 [2024-05-15 16:49:30.197607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.050 [2024-05-15 16:49:30.197623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.050 [2024-05-15 16:49:30.197631] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.197637] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=4096, cccid=4 00:28:23.050 [2024-05-15 16:49:30.197645] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb0770) on tqpair(0xc57120): expected_datao=0, payload_size=4096 00:28:23.050 [2024-05-15 16:49:30.197653] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.197670] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.197679] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.241250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.241258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241265] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.241279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.241295] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.241312] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.241323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.241332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.241341] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:23.050 [2024-05-15 16:49:30.241349] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:23.050 [2024-05-15 16:49:30.241358] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:23.050 [2024-05-15 16:49:30.241386] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241400] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.241412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.241424] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241431] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.241447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.050 [2024-05-15 16:49:30.241475] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.050 [2024-05-15 16:49:30.241487] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb08d0, cid 5, qid 0 00:28:23.050 [2024-05-15 16:49:30.241609] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.241621] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.241629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241636] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.241647] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.241657] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.241663] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb08d0) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.241686] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241695] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.241705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.241726] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb08d0, cid 5, qid 0 00:28:23.050 [2024-05-15 16:49:30.241840] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.241856] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.241863] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb08d0) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.241887] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.241896] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.241907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.241928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb08d0, cid 5, qid 0 00:28:23.050 [2024-05-15 16:49:30.242042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.242057] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.242065] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb08d0) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.242088] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242097] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.242108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.242132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb08d0, cid 5, qid 0 00:28:23.050 [2024-05-15 16:49:30.242234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.050 [2024-05-15 16:49:30.242248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.050 [2024-05-15 16:49:30.242256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb08d0) on tqpair=0xc57120 00:28:23.050 [2024-05-15 16:49:30.242283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.242304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.242315] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242323] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.242333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.242344] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.242361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.242374] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242381] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc57120) 00:28:23.050 [2024-05-15 16:49:30.242391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.050 [2024-05-15 16:49:30.242413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb08d0, cid 5, qid 0 00:28:23.050 [2024-05-15 16:49:30.242424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0770, cid 4, qid 0 00:28:23.050 [2024-05-15 16:49:30.242432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0a30, cid 6, qid 0 00:28:23.050 [2024-05-15 16:49:30.242440] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0b90, cid 7, qid 0 00:28:23.050 [2024-05-15 16:49:30.242630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.050 [2024-05-15 16:49:30.242643] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.050 [2024-05-15 16:49:30.242651] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.050 [2024-05-15 16:49:30.242657] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=8192, cccid=5 00:28:23.050 [2024-05-15 16:49:30.242665] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb08d0) on tqpair(0xc57120): expected_datao=0, payload_size=8192 00:28:23.051 [2024-05-15 16:49:30.242673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242721] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242731] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242740] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.051 [2024-05-15 16:49:30.242750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.051 [2024-05-15 16:49:30.242757] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242764] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=512, cccid=4 00:28:23.051 [2024-05-15 16:49:30.242772] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb0770) on tqpair(0xc57120): expected_datao=0, payload_size=512 00:28:23.051 [2024-05-15 16:49:30.242784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242794] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242801] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.051 [2024-05-15 16:49:30.242819] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.051 [2024-05-15 16:49:30.242826] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242833] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=512, cccid=6 00:28:23.051 [2024-05-15 16:49:30.242841] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb0a30) on tqpair(0xc57120): expected_datao=0, payload_size=512 00:28:23.051 [2024-05-15 16:49:30.242848] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242858] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242865] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.051 [2024-05-15 16:49:30.242883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.051 [2024-05-15 16:49:30.242890] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242897] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57120): datao=0, datal=4096, cccid=7 00:28:23.051 [2024-05-15 16:49:30.242904] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xcb0b90) on tqpair(0xc57120): expected_datao=0, payload_size=4096 00:28:23.051 [2024-05-15 16:49:30.242912] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242922] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242930] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.051 [2024-05-15 16:49:30.242952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.051 [2024-05-15 16:49:30.242959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.242966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb08d0) on tqpair=0xc57120 00:28:23.051 [2024-05-15 16:49:30.242986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.051 [2024-05-15 16:49:30.242998] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.051 [2024-05-15 16:49:30.243005] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.243012] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0770) on tqpair=0xc57120 00:28:23.051 [2024-05-15 16:49:30.243026] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.051 [2024-05-15 16:49:30.243037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.051 [2024-05-15 16:49:30.243044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.243050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0a30) on tqpair=0xc57120 00:28:23.051 [2024-05-15 16:49:30.243065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.051 [2024-05-15 16:49:30.243091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.051 [2024-05-15 16:49:30.243098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.051 [2024-05-15 16:49:30.243105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0b90) on tqpair=0xc57120 00:28:23.051 ===================================================== 00:28:23.051 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.051 ===================================================== 00:28:23.051 Controller Capabilities/Features 00:28:23.051 ================================ 00:28:23.051 Vendor ID: 8086 00:28:23.051 Subsystem Vendor ID: 8086 00:28:23.051 Serial Number: SPDK00000000000001 00:28:23.051 Model Number: SPDK bdev Controller 00:28:23.051 Firmware Version: 24.05 00:28:23.051 Recommended Arb Burst: 6 00:28:23.051 IEEE OUI Identifier: e4 d2 5c 00:28:23.051 Multi-path I/O 00:28:23.051 May have multiple subsystem ports: Yes 00:28:23.051 May have multiple controllers: Yes 00:28:23.051 Associated with SR-IOV VF: No 00:28:23.051 Max Data Transfer Size: 131072 00:28:23.051 Max Number of Namespaces: 32 00:28:23.051 Max Number of I/O Queues: 127 00:28:23.051 NVMe Specification Version (VS): 1.3 00:28:23.051 NVMe Specification Version (Identify): 1.3 00:28:23.051 Maximum Queue Entries: 128 00:28:23.051 Contiguous Queues Required: Yes 00:28:23.051 Arbitration Mechanisms Supported 00:28:23.051 Weighted Round Robin: Not Supported 00:28:23.051 Vendor Specific: Not Supported 00:28:23.051 Reset Timeout: 15000 ms 00:28:23.051 Doorbell Stride: 4 bytes 00:28:23.051 
NVM Subsystem Reset: Not Supported 00:28:23.051 Command Sets Supported 00:28:23.051 NVM Command Set: Supported 00:28:23.051 Boot Partition: Not Supported 00:28:23.051 Memory Page Size Minimum: 4096 bytes 00:28:23.051 Memory Page Size Maximum: 4096 bytes 00:28:23.051 Persistent Memory Region: Not Supported 00:28:23.051 Optional Asynchronous Events Supported 00:28:23.051 Namespace Attribute Notices: Supported 00:28:23.051 Firmware Activation Notices: Not Supported 00:28:23.051 ANA Change Notices: Not Supported 00:28:23.051 PLE Aggregate Log Change Notices: Not Supported 00:28:23.051 LBA Status Info Alert Notices: Not Supported 00:28:23.051 EGE Aggregate Log Change Notices: Not Supported 00:28:23.051 Normal NVM Subsystem Shutdown event: Not Supported 00:28:23.051 Zone Descriptor Change Notices: Not Supported 00:28:23.051 Discovery Log Change Notices: Not Supported 00:28:23.051 Controller Attributes 00:28:23.051 128-bit Host Identifier: Supported 00:28:23.051 Non-Operational Permissive Mode: Not Supported 00:28:23.051 NVM Sets: Not Supported 00:28:23.051 Read Recovery Levels: Not Supported 00:28:23.051 Endurance Groups: Not Supported 00:28:23.051 Predictable Latency Mode: Not Supported 00:28:23.051 Traffic Based Keep Alive: Not Supported 00:28:23.051 Namespace Granularity: Not Supported 00:28:23.051 SQ Associations: Not Supported 00:28:23.051 UUID List: Not Supported 00:28:23.051 Multi-Domain Subsystem: Not Supported 00:28:23.051 Fixed Capacity Management: Not Supported 00:28:23.051 Variable Capacity Management: Not Supported 00:28:23.051 Delete Endurance Group: Not Supported 00:28:23.051 Delete NVM Set: Not Supported 00:28:23.051 Extended LBA Formats Supported: Not Supported 00:28:23.051 Flexible Data Placement Supported: Not Supported 00:28:23.051 00:28:23.051 Controller Memory Buffer Support 00:28:23.051 ================================ 00:28:23.051 Supported: No 00:28:23.051 00:28:23.051 Persistent Memory Region Support 00:28:23.051 ================================ 00:28:23.051 Supported: No 00:28:23.051 00:28:23.051 Admin Command Set Attributes 00:28:23.051 ============================ 00:28:23.051 Security Send/Receive: Not Supported 00:28:23.051 Format NVM: Not Supported 00:28:23.051 Firmware Activate/Download: Not Supported 00:28:23.051 Namespace Management: Not Supported 00:28:23.051 Device Self-Test: Not Supported 00:28:23.051 Directives: Not Supported 00:28:23.051 NVMe-MI: Not Supported 00:28:23.051 Virtualization Management: Not Supported 00:28:23.051 Doorbell Buffer Config: Not Supported 00:28:23.051 Get LBA Status Capability: Not Supported 00:28:23.051 Command & Feature Lockdown Capability: Not Supported 00:28:23.051 Abort Command Limit: 4 00:28:23.051 Async Event Request Limit: 4 00:28:23.051 Number of Firmware Slots: N/A 00:28:23.051 Firmware Slot 1 Read-Only: N/A 00:28:23.051 Firmware Activation Without Reset: N/A 00:28:23.051 Multiple Update Detection Support: N/A 00:28:23.051 Firmware Update Granularity: No Information Provided 00:28:23.051 Per-Namespace SMART Log: No 00:28:23.051 Asymmetric Namespace Access Log Page: Not Supported 00:28:23.051 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:23.051 Command Effects Log Page: Supported 00:28:23.051 Get Log Page Extended Data: Supported 00:28:23.051 Telemetry Log Pages: Not Supported 00:28:23.051 Persistent Event Log Pages: Not Supported 00:28:23.051 Supported Log Pages Log Page: May Support 00:28:23.051 Commands Supported & Effects Log Page: Not Supported 00:28:23.051 Feature Identifiers & Effects Log Page: May Support
00:28:23.051 NVMe-MI Commands & Effects Log Page: May Support 00:28:23.051 Data Area 4 for Telemetry Log: Not Supported 00:28:23.051 Error Log Page Entries Supported: 128 00:28:23.051 Keep Alive: Supported 00:28:23.051 Keep Alive Granularity: 10000 ms 00:28:23.051 00:28:23.051 NVM Command Set Attributes 00:28:23.051 ========================== 00:28:23.051 Submission Queue Entry Size 00:28:23.051 Max: 64 00:28:23.051 Min: 64 00:28:23.051 Completion Queue Entry Size 00:28:23.051 Max: 16 00:28:23.051 Min: 16 00:28:23.051 Number of Namespaces: 32 00:28:23.051 Compare Command: Supported 00:28:23.051 Write Uncorrectable Command: Not Supported 00:28:23.051 Dataset Management Command: Supported 00:28:23.051 Write Zeroes Command: Supported 00:28:23.051 Set Features Save Field: Not Supported 00:28:23.051 Reservations: Supported 00:28:23.051 Timestamp: Not Supported 00:28:23.051 Copy: Supported 00:28:23.051 Volatile Write Cache: Present 00:28:23.051 Atomic Write Unit (Normal): 1 00:28:23.051 Atomic Write Unit (PFail): 1 00:28:23.051 Atomic Compare & Write Unit: 1 00:28:23.052 Fused Compare & Write: Supported 00:28:23.052 Scatter-Gather List 00:28:23.052 SGL Command Set: Supported 00:28:23.052 SGL Keyed: Supported 00:28:23.052 SGL Bit Bucket Descriptor: Not Supported 00:28:23.052 SGL Metadata Pointer: Not Supported 00:28:23.052 Oversized SGL: Not Supported 00:28:23.052 SGL Metadata Address: Not Supported 00:28:23.052 SGL Offset: Supported 00:28:23.052 Transport SGL Data Block: Not Supported 00:28:23.052 Replay Protected Memory Block: Not Supported 00:28:23.052 00:28:23.052 Firmware Slot Information 00:28:23.052 ========================= 00:28:23.052 Active slot: 1 00:28:23.052 Slot 1 Firmware Revision: 24.05 00:28:23.052 00:28:23.052 00:28:23.052 Commands Supported and Effects 00:28:23.052 ============================== 00:28:23.052 Admin Commands 00:28:23.052 -------------- 00:28:23.052 Get Log Page (02h): Supported 00:28:23.052 Identify (06h): Supported 00:28:23.052 Abort (08h): Supported 00:28:23.052 Set Features (09h): Supported 00:28:23.052 Get Features (0Ah): Supported 00:28:23.052 Asynchronous Event Request (0Ch): Supported 00:28:23.052 Keep Alive (18h): Supported 00:28:23.052 I/O Commands 00:28:23.052 ------------ 00:28:23.052 Flush (00h): Supported LBA-Change 00:28:23.052 Write (01h): Supported LBA-Change 00:28:23.052 Read (02h): Supported 00:28:23.052 Compare (05h): Supported 00:28:23.052 Write Zeroes (08h): Supported LBA-Change 00:28:23.052 Dataset Management (09h): Supported LBA-Change 00:28:23.052 Copy (19h): Supported LBA-Change 00:28:23.052 Unknown (79h): Supported LBA-Change 00:28:23.052 Unknown (7Ah): Supported 00:28:23.052 00:28:23.052 Error Log 00:28:23.052 ========= 00:28:23.052 00:28:23.052 Arbitration 00:28:23.052 =========== 00:28:23.052 Arbitration Burst: 1 00:28:23.052 00:28:23.052 Power Management 00:28:23.052 ================ 00:28:23.052 Number of Power States: 1 00:28:23.052 Current Power State: Power State #0 00:28:23.052 Power State #0: 00:28:23.052 Max Power: 0.00 W 00:28:23.052 Non-Operational State: Operational 00:28:23.052 Entry Latency: Not Reported 00:28:23.052 Exit Latency: Not Reported 00:28:23.052 Relative Read Throughput: 0 00:28:23.052 Relative Read Latency: 0 00:28:23.052 Relative Write Throughput: 0 00:28:23.052 Relative Write Latency: 0 00:28:23.052 Idle Power: Not Reported 00:28:23.052 Active Power: Not Reported 00:28:23.052 Non-Operational Permissive Mode: Not Supported 00:28:23.052 00:28:23.052 Health Information 00:28:23.052 ================== 
00:28:23.052 Critical Warnings: 00:28:23.052 Available Spare Space: OK 00:28:23.052 Temperature: OK 00:28:23.052 Device Reliability: OK 00:28:23.052 Read Only: No 00:28:23.052 Volatile Memory Backup: OK 00:28:23.052 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:23.052 Temperature Threshold: [2024-05-15 16:49:30.243264] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc57120) 00:28:23.052 [2024-05-15 16:49:30.243288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.052 [2024-05-15 16:49:30.243310] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0b90, cid 7, qid 0 00:28:23.052 [2024-05-15 16:49:30.243439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.052 [2024-05-15 16:49:30.243453] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.052 [2024-05-15 16:49:30.243460] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0b90) on tqpair=0xc57120 00:28:23.052 [2024-05-15 16:49:30.243509] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:23.052 [2024-05-15 16:49:30.243533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.052 [2024-05-15 16:49:30.243545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.052 [2024-05-15 16:49:30.243555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.052 [2024-05-15 16:49:30.243565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.052 [2024-05-15 16:49:30.243579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57120) 00:28:23.052 [2024-05-15 16:49:30.243604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.052 [2024-05-15 16:49:30.243627] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0610, cid 3, qid 0 00:28:23.052 [2024-05-15 16:49:30.243732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.052 [2024-05-15 16:49:30.243744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.052 [2024-05-15 16:49:30.243751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243758] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0610) on tqpair=0xc57120 00:28:23.052 [2024-05-15 16:49:30.243770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243778] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243784] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57120) 00:28:23.052 [2024-05-15 16:49:30.243795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.052 [2024-05-15 16:49:30.243820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0610, cid 3, qid 0 00:28:23.052 [2024-05-15 16:49:30.243953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.052 [2024-05-15 16:49:30.243969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.052 [2024-05-15 16:49:30.243976] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.243983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0610) on tqpair=0xc57120 00:28:23.052 [2024-05-15 16:49:30.243991] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:23.052 [2024-05-15 16:49:30.243999] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:23.052 [2024-05-15 16:49:30.244016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.244025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.244032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57120) 00:28:23.052 [2024-05-15 16:49:30.244042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.052 [2024-05-15 16:49:30.244062] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0610, cid 3, qid 0 00:28:23.052 [2024-05-15 16:49:30.244168] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.052 [2024-05-15 16:49:30.244184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.052 [2024-05-15 16:49:30.244191] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.244198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0610) on tqpair=0xc57120 00:28:23.052 [2024-05-15 16:49:30.248224] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.248240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.248247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57120) 00:28:23.052 [2024-05-15 16:49:30.248258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.052 [2024-05-15 16:49:30.248281] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb0610, cid 3, qid 0 00:28:23.052 [2024-05-15 16:49:30.248423] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.052 [2024-05-15 16:49:30.248436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.052 [2024-05-15 16:49:30.248443] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.052 [2024-05-15 16:49:30.248450] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcb0610) on tqpair=0xc57120 00:28:23.052 [2024-05-15 16:49:30.248463] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 
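The *DEBUG* records woven through the report above come from the TCP transport as the identify example shuts the controller down: the ABORTED - SQ DELETION completions are in-flight admin commands being cancelled, and the FABRIC PROPERTY SET/GET pairs implement the controller shutdown handshake (write CC.SHN over the fabrics property interface, then poll CSTS until nvme_ctrlr_shutdown_poll_async reports completion, here after 4 milliseconds). The '0 Kelvin (-273 Celsius)' value on the next line is the tail of the 'Temperature Threshold:' field that these traces split. When reading such a log offline, a filter along these lines (the file name is a placeholder, not from the harness) reassembles the report:

    grep -v '\*DEBUG\*' nvmf-identify.log    # drop the interleaved transport traces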
00:28:23.052 0 Kelvin (-273 Celsius) 00:28:23.052 Available Spare: 0% 00:28:23.052 Available Spare Threshold: 0% 00:28:23.053 Life Percentage Used: 0% 00:28:23.053 Data Units Read: 0 00:28:23.053 Data Units Written: 0 00:28:23.053 Host Read Commands: 0 00:28:23.053 Host Write Commands: 0 00:28:23.053 Controller Busy Time: 0 minutes 00:28:23.053 Power Cycles: 0 00:28:23.053 Power On Hours: 0 hours 00:28:23.053 Unsafe Shutdowns: 0 00:28:23.053 Unrecoverable Media Errors: 0 00:28:23.053 Lifetime Error Log Entries: 0 00:28:23.053 Warning Temperature Time: 0 minutes 00:28:23.053 Critical Temperature Time: 0 minutes 00:28:23.053 00:28:23.053 Number of Queues 00:28:23.053 ================ 00:28:23.053 Number of I/O Submission Queues: 127 00:28:23.053 Number of I/O Completion Queues: 127 00:28:23.053 00:28:23.053 Active Namespaces 00:28:23.053 ================= 00:28:23.053 Namespace ID:1 00:28:23.053 Error Recovery Timeout: Unlimited 00:28:23.053 Command Set Identifier: NVM (00h) 00:28:23.053 Deallocate: Supported 00:28:23.053 Deallocated/Unwritten Error: Not Supported 00:28:23.053 Deallocated Read Value: Unknown 00:28:23.053 Deallocate in Write Zeroes: Not Supported 00:28:23.053 Deallocated Guard Field: 0xFFFF 00:28:23.053 Flush: Supported 00:28:23.053 Reservation: Supported 00:28:23.053 Namespace Sharing Capabilities: Multiple Controllers 00:28:23.053 Size (in LBAs): 131072 (0GiB) 00:28:23.053 Capacity (in LBAs): 131072 (0GiB) 00:28:23.053 Utilization (in LBAs): 131072 (0GiB) 00:28:23.053 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:23.053 EUI64: ABCDEF0123456789 00:28:23.053 UUID: 87cf5dab-4e7b-4656-abe5-7fbc92de7bdd 00:28:23.053 Thin Provisioning: Not Supported 00:28:23.053 Per-NS Atomic Units: Yes 00:28:23.053 Atomic Boundary Size (Normal): 0 00:28:23.053 Atomic Boundary Size (PFail): 0 00:28:23.053 Atomic Boundary Offset: 0 00:28:23.053 Maximum Single Source Range Length: 65535 00:28:23.053 Maximum Copy Length: 65535 00:28:23.053 Maximum Source Range Count: 1 00:28:23.053 NGUID/EUI64 Never Reused: No 00:28:23.053 Namespace Write Protected: No 00:28:23.053 Number of LBA Formats: 1 00:28:23.053 Current LBA Format: LBA Format #00 00:28:23.053 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:23.053 00:28:23.053 16:49:30 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:23.053 16:49:30 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.053 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.053 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.311 rmmod nvme_tcp 00:28:23.311 rmmod nvme_fabrics 00:28:23.311 rmmod nvme_keyring 00:28:23.311 16:49:30 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1875023 ']' 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1875023 00:28:23.311 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1875023 ']' 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1875023 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1875023 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1875023' 00:28:23.312 killing process with pid 1875023 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1875023 00:28:23.312 [2024-05-15 16:49:30.367061] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:23.312 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1875023 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.571 16:49:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.473 16:49:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.473 00:28:25.473 real 0m5.894s 00:28:25.473 user 0m5.014s 00:28:25.473 sys 0m2.124s 00:28:25.473 16:49:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:25.473 16:49:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:25.473 ************************************ 00:28:25.473 END TEST nvmf_identify 00:28:25.473 ************************************ 00:28:25.731 16:49:32 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:25.731 16:49:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:25.731 16:49:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:25.731 16:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.731 ************************************ 00:28:25.731 START TEST nvmf_perf 
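The teardown just traced is the standard nvmftestfini path: remove the subsystem over RPC, unload the initiator-side kernel modules, kill the target, then tear down the test namespace and flush the interfaces. Condensed to its effective commands (a sketch assembled from the trace above, run from the SPDK repository root):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 1875023 && wait 1875023     # the nvmf_tgt pid for this run
    ip -4 addr flush cvl_0_1         # after _remove_spdk_ns has removed the target namespace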
00:28:25.731 ************************************ 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:25.731 * Looking for test storage... 00:28:25.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.731 
16:49:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.731 16:49:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:28.258 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:28.258 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:28.258 Found net devices under 0000:09:00.0: cvl_0_0 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:28.258 Found net devices under 0000:09:00.1: cvl_0_1 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:28.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:28:28.258 00:28:28.258 --- 10.0.0.2 ping statistics --- 00:28:28.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.258 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:28:28.258 00:28:28.258 --- 10.0.0.1 ping statistics --- 00:28:28.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.258 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.258 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1877383 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1877383 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1877383 ']' 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:28.259 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.259 [2024-05-15 16:49:35.370471] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:28:28.259 [2024-05-15 16:49:35.370584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.259 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.259 [2024-05-15 16:49:35.446079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.516 [2024-05-15 16:49:35.535873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.516 [2024-05-15 16:49:35.535926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
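At this point the target application is up inside the namespace. The network bring-up that preceded it reduces to a short sequence once the xtrace noise is stripped away (commands taken verbatim from the trace; the nvmf_tgt path is shortened to its repository-relative form):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Placing one e810 port in its own network namespace lets a single machine act as both initiator and target over real hardware, with the ping checks above confirming the 10.0.0.1 <-> 10.0.0.2 path before any NVMe traffic is attempted.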
00:28:28.516 [2024-05-15 16:49:35.535939] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.516 [2024-05-15 16:49:35.535950] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.516 [2024-05-15 16:49:35.535960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.516 [2024-05-15 16:49:35.536010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.516 [2024-05-15 16:49:35.536067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.516 [2024-05-15 16:49:35.536132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.516 [2024-05-15 16:49:35.536135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:28.516 16:49:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:31.793 16:49:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:31.793 16:49:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:32.052 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:28:32.052 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:32.310 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:32.310 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:28:32.310 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:32.310 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:32.310 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:32.568 [2024-05-15 16:49:39.558089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.568 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.825 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:32.825 16:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:33.083 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:33.083 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:33.341 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.341 [2024-05-15 16:49:40.541545] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:33.341 [2024-05-15 16:49:40.541863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.341 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:33.598 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:28:33.598 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:28:33.598 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:33.598 16:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:28:34.972 Initializing NVMe Controllers 00:28:34.972 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:28:34.972 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:28:34.972 Initialization complete. Launching workers. 00:28:34.972 ======================================================== 00:28:34.972 Latency(us) 00:28:34.972 Device Information : IOPS MiB/s Average min max 00:28:34.972 PCIE (0000:0b:00.0) NSID 1 from core 0: 84082.78 328.45 380.10 22.89 5266.57 00:28:34.972 ======================================================== 00:28:34.972 Total : 84082.78 328.45 380.10 22.89 5266.57 00:28:34.972 00:28:34.972 16:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.972 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.342 Initializing NVMe Controllers 00:28:36.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:36.343 Initialization complete. Launching workers. 
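With the app up, the subsystem these fabrics runs target was assembled entirely over RPC: a TCP transport, one subsystem carrying a 64 MiB RAM-backed Malloc0 plus the local NVMe drive attached as Nvme0n1, and data and discovery listeners on 10.0.0.2:4420. Condensed from the trace above:

    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

In the spdk_nvme_perf invocations, -q is the queue depth, -o the I/O size in bytes, -w randrw -M 50 a 50/50 random read/write mix, -t the run time in seconds, and -r the transport ID: PCIe for the local baseline measured above, and the TCP listener for the fabrics runs, the first of which (q=1, 4 KiB) produces the latency table that follows.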
00:28:36.343 ======================================================== 00:28:36.343 Latency(us) 00:28:36.343 Device Information : IOPS MiB/s Average min max 00:28:36.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.66 0.38 10321.17 169.35 45795.66 00:28:36.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.79 0.23 17414.10 6100.42 47899.73 00:28:36.343 ======================================================== 00:28:36.343 Total : 156.45 0.61 12986.67 169.35 47899.73 00:28:36.343 00:28:36.343 16:49:43 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.343 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.275 Initializing NVMe Controllers 00:28:37.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:37.275 Initialization complete. Launching workers. 00:28:37.275 ======================================================== 00:28:37.275 Latency(us) 00:28:37.275 Device Information : IOPS MiB/s Average min max 00:28:37.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8423.16 32.90 3799.82 511.80 11336.90 00:28:37.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3860.12 15.08 8291.73 6830.28 19199.20 00:28:37.275 ======================================================== 00:28:37.275 Total : 12283.28 47.98 5211.44 511.80 19199.20 00:28:37.275 00:28:37.275 16:49:44 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:37.275 16:49:44 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:37.275 16:49:44 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.532 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.057 Initializing NVMe Controllers 00:28:40.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.057 Controller IO queue size 128, less than required. 00:28:40.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.057 Controller IO queue size 128, less than required. 00:28:40.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:40.057 Initialization complete. Launching workers. 
00:28:40.057 ======================================================== 00:28:40.057 Latency(us) 00:28:40.057 Device Information : IOPS MiB/s Average min max 00:28:40.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1239.47 309.87 105933.66 62492.48 160119.79 00:28:40.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.49 143.37 227131.50 76419.48 368500.21 00:28:40.057 ======================================================== 00:28:40.057 Total : 1812.95 453.24 144271.75 62492.48 368500.21 00:28:40.057 00:28:40.057 16:49:46 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:40.057 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.057 No valid NVMe controllers or AIO or URING devices found 00:28:40.057 Initializing NVMe Controllers 00:28:40.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.057 Controller IO queue size 128, less than required. 00:28:40.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.057 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:40.057 Controller IO queue size 128, less than required. 00:28:40.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.057 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:40.057 WARNING: Some requested NVMe devices were skipped 00:28:40.057 16:49:47 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:40.057 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.374 Initializing NVMe Controllers 00:28:43.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.374 Controller IO queue size 128, less than required. 00:28:43.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.374 Controller IO queue size 128, less than required. 00:28:43.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:43.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:43.374 Initialization complete. Launching workers. 
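Two details in this stretch are easy to miss. The 36964-byte run above is deliberately misaligned: both namespaces expose 512-byte sectors, and

    echo $(( 36964 % 512 ))    # -> 100, not sector aligned

so perf removes each namespace from the test and ends up with no valid controllers, exercising the skip path rather than measuring anything. The --transport-stat run now starting prints per-poll-group counters ahead of its latency table: polls versus idle_polls indicate how busy the transport loop was, sock_completions and nvme_completions count socket events and completed NVMe commands, and submitted_requests/queued_requests show how often commands had to wait for a free request slot.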
00:28:43.374 00:28:43.374 ==================== 00:28:43.374 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:43.374 TCP transport: 00:28:43.374 polls: 14493 00:28:43.374 idle_polls: 6160 00:28:43.374 sock_completions: 8333 00:28:43.374 nvme_completions: 5237 00:28:43.374 submitted_requests: 7850 00:28:43.374 queued_requests: 1 00:28:43.375 00:28:43.375 ==================== 00:28:43.375 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:43.375 TCP transport: 00:28:43.375 polls: 17969 00:28:43.375 idle_polls: 9606 00:28:43.375 sock_completions: 8363 00:28:43.375 nvme_completions: 5355 00:28:43.375 submitted_requests: 7996 00:28:43.375 queued_requests: 1 00:28:43.375 ======================================================== 00:28:43.375 Latency(us) 00:28:43.375 Device Information : IOPS MiB/s Average min max 00:28:43.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1308.66 327.17 100150.17 54887.01 144435.09 00:28:43.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1338.16 334.54 96983.59 53616.95 133129.05 00:28:43.375 ======================================================== 00:28:43.375 Total : 2646.82 661.70 98549.23 53616.95 144435.09 00:28:43.375 00:28:43.375 16:49:49 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:43.375 16:49:49 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.375 16:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:43.375 16:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:28:43.375 16:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=8df2f280-b156-4dc9-8ba9-efceaf759f35 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 8df2f280-b156-4dc9-8ba9-efceaf759f35 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=8df2f280-b156-4dc9-8ba9-efceaf759f35 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:46.653 { 00:28:46.653 "uuid": "8df2f280-b156-4dc9-8ba9-efceaf759f35", 00:28:46.653 "name": "lvs_0", 00:28:46.653 "base_bdev": "Nvme0n1", 00:28:46.653 "total_data_clusters": 238234, 00:28:46.653 "free_clusters": 238234, 00:28:46.653 "block_size": 512, 00:28:46.653 "cluster_size": 4194304 00:28:46.653 } 00:28:46.653 ]' 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="8df2f280-b156-4dc9-8ba9-efceaf759f35") .free_clusters' 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="8df2f280-b156-4dc9-8ba9-efceaf759f35") .cluster_size' 00:28:46.653 16:49:53 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:46.653 952936 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:46.653 16:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8df2f280-b156-4dc9-8ba9-efceaf759f35 lbd_0 20480 00:28:47.218 16:49:54 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4e635638-b929-4560-92c0-69f30c70bfe3 00:28:47.218 16:49:54 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4e635638-b929-4560-92c0-69f30c70bfe3 lvs_n_0 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=88c4b701-05eb-4771-94b7-2266fe8f9cfc 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 88c4b701-05eb-4771-94b7-2266fe8f9cfc 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=88c4b701-05eb-4771-94b7-2266fe8f9cfc 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:48.148 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:48.405 { 00:28:48.405 "uuid": "8df2f280-b156-4dc9-8ba9-efceaf759f35", 00:28:48.405 "name": "lvs_0", 00:28:48.405 "base_bdev": "Nvme0n1", 00:28:48.405 "total_data_clusters": 238234, 00:28:48.405 "free_clusters": 233114, 00:28:48.405 "block_size": 512, 00:28:48.405 "cluster_size": 4194304 00:28:48.405 }, 00:28:48.405 { 00:28:48.405 "uuid": "88c4b701-05eb-4771-94b7-2266fe8f9cfc", 00:28:48.405 "name": "lvs_n_0", 00:28:48.405 "base_bdev": "4e635638-b929-4560-92c0-69f30c70bfe3", 00:28:48.405 "total_data_clusters": 5114, 00:28:48.405 "free_clusters": 5114, 00:28:48.405 "block_size": 512, 00:28:48.405 "cluster_size": 4194304 00:28:48.405 } 00:28:48.405 ]' 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="88c4b701-05eb-4771-94b7-2266fe8f9cfc") .free_clusters' 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="88c4b701-05eb-4771-94b7-2266fe8f9cfc") .cluster_size' 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:48.405 20456 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:48.405 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 88c4b701-05eb-4771-94b7-2266fe8f9cfc lbd_nest_0 20456 00:28:48.662 16:49:55 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=201b58e4-145b-4688-982a-7a5facefb2f8 00:28:48.662 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.919 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:48.919 16:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 201b58e4-145b-4688-982a-7a5facefb2f8 00:28:49.176 16:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.433 16:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:49.433 16:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:49.434 16:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:49.434 16:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:49.434 16:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.434 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.617 Initializing NVMe Controllers 00:29:01.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.617 Initialization complete. Launching workers. 00:29:01.617 ======================================================== 00:29:01.617 Latency(us) 00:29:01.618 Device Information : IOPS MiB/s Average min max 00:29:01.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.60 0.02 22491.91 201.64 45775.75 00:29:01.618 ======================================================== 00:29:01.618 Total : 44.60 0.02 22491.91 201.64 45775.75 00:29:01.618 00:29:01.618 16:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:01.618 16:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.618 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.572 Initializing NVMe Controllers 00:29:11.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.572 Initialization complete. Launching workers. 
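The volume behind this sweep was layered above: an lvstore on the raw NVMe bdev, a 20 GiB logical volume carved from it, a second lvstore nested on that volume, and a final volume exported through the re-created cnode1 subsystem and its 4420 listener. get_lvs_free_mb is simply free_clusters x cluster_size: 238234 x 4 MiB = 952936 MiB for lvs_0 (capped to the 20480 MiB the test needs) and 5114 x 4 MiB = 20456 MiB for the nested store. Condensed from the trace, with the UUIDs this run produced:

    scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
    scripts/rpc.py bdev_lvol_create -u 8df2f280-b156-4dc9-8ba9-efceaf759f35 lbd_0 20480
    scripts/rpc.py bdev_lvol_create_lvstore 4e635638-b929-4560-92c0-69f30c70bfe3 lvs_n_0
    scripts/rpc.py bdev_lvol_create -u 88c4b701-05eb-4771-94b7-2266fe8f9cfc lbd_nest_0 20456
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 201b58e4-145b-4688-982a-7a5facefb2f8

The sweep then runs queue depths 1, 32 and 128 against I/O sizes of 512 bytes and 128 KiB, six runs in all; the q=1, 512-byte point was reported above, and the 128 KiB point at the same depth follows.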
00:29:11.572 ======================================================== 00:29:11.572 Latency(us) 00:29:11.572 Device Information : IOPS MiB/s Average min max 00:29:11.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.38 9.92 12597.12 6013.69 48853.33 00:29:11.572 ======================================================== 00:29:11.572 Total : 79.38 9.92 12597.12 6013.69 48853.33 00:29:11.572 00:29:11.572 16:50:17 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:11.572 16:50:17 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:11.572 16:50:17 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.572 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.532 Initializing NVMe Controllers 00:29:21.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.532 Initialization complete. Launching workers. 00:29:21.532 ======================================================== 00:29:21.532 Latency(us) 00:29:21.532 Device Information : IOPS MiB/s Average min max 00:29:21.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7368.60 3.60 4342.90 291.60 11118.11 00:29:21.532 ======================================================== 00:29:21.532 Total : 7368.60 3.60 4342.90 291.60 11118.11 00:29:21.532 00:29:21.532 16:50:27 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:21.532 16:50:27 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.532 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.552 Initializing NVMe Controllers 00:29:31.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:31.552 Initialization complete. Launching workers. 00:29:31.552 ======================================================== 00:29:31.552 Latency(us) 00:29:31.552 Device Information : IOPS MiB/s Average min max 00:29:31.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2640.90 330.11 12121.67 634.06 28832.76 00:29:31.552 ======================================================== 00:29:31.552 Total : 2640.90 330.11 12121.67 634.06 28832.76 00:29:31.552 00:29:31.552 16:50:37 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:31.552 16:50:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:31.552 16:50:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.552 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.515 Initializing NVMe Controllers 00:29:41.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.515 Controller IO queue size 128, less than required. 00:29:41.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:41.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:41.515 Initialization complete. Launching workers. 00:29:41.515 ======================================================== 00:29:41.515 Latency(us) 00:29:41.515 Device Information : IOPS MiB/s Average min max 00:29:41.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11632.05 5.68 11010.49 1648.61 25714.34 00:29:41.515 ======================================================== 00:29:41.515 Total : 11632.05 5.68 11010.49 1648.61 25714.34 00:29:41.515 00:29:41.515 16:50:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:41.515 16:50:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.515 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.481 Initializing NVMe Controllers 00:29:51.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.481 Controller IO queue size 128, less than required. 00:29:51.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.481 Initialization complete. Launching workers. 00:29:51.481 ======================================================== 00:29:51.481 Latency(us) 00:29:51.481 Device Information : IOPS MiB/s Average min max 00:29:51.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1196.90 149.61 107500.38 23505.85 230988.81 00:29:51.481 ======================================================== 00:29:51.481 Total : 1196.90 149.61 107500.38 23505.85 230988.81 00:29:51.481 00:29:51.481 16:50:58 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.481 16:50:58 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 201b58e4-145b-4688-982a-7a5facefb2f8 00:29:52.411 16:50:59 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:52.411 16:50:59 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e635638-b929-4560-92c0-69f30c70bfe3 00:29:52.668 16:50:59 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.925 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.926 rmmod nvme_tcp 00:29:52.926 rmmod nvme_fabrics 00:29:52.926 rmmod nvme_keyring 00:29:52.926 16:51:00 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.926 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:52.926 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:52.926 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1877383 ']' 00:29:52.926 16:51:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1877383 00:29:52.926 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1877383 ']' 00:29:52.926 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1877383 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1877383 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1877383' 00:29:53.183 killing process with pid 1877383 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1877383 00:29:53.183 [2024-05-15 16:51:00.179450] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:53.183 16:51:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1877383 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.556 16:51:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.088 16:51:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.088 00:29:57.088 real 1m31.018s 00:29:57.088 user 5m33.610s 00:29:57.088 sys 0m15.978s 00:29:57.088 16:51:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:57.088 16:51:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.088 ************************************ 00:29:57.088 END TEST nvmf_perf 00:29:57.088 ************************************ 00:29:57.088 16:51:03 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:57.088 16:51:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:57.088 16:51:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:57.088 16:51:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.088 ************************************ 00:29:57.088 START TEST nvmf_fio_host 00:29:57.088 ************************************ 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:57.088 * Looking for test storage... 00:29:57.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
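The PATH values above balloon because paths/export.sh unconditionally prepends the same tool directories each time it is sourced, and fio.sh pulls it in both directly and again via nvmf/common.sh a few lines below. A hypothetical guard (not part of the SPDK scripts, shown only to illustrate the pattern) that would keep the variable flat:

# path_prepend: add a directory to PATH only if it is not already a component
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;                 # already present, do nothing
    *) PATH="$1:$PATH" ;;
  esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH

The duplication is harmless for lookup, since the first match wins, but it makes traces like this one much noisier than they need to be.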
00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.088 16:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.089 16:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
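nvmftestinit now turns to hardware discovery and network setup. The trace that follows classifies the host's NICs by PCI ID, finds the two Intel E810 ports (0x8086:0x159b, net devices cvl_0_0 and cvl_0_1), and then builds the test topology; condensed to the underlying commands, with device names and addresses exactly as they appear in the log, it amounts to:

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

Running the target in its own namespace lets a single two-port machine exercise real NVMe/TCP traffic over the physical links instead of the loopback device, which is the point of the phy variant of this test.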
00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:59.614 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:59.614 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:59.614 Found net devices under 0000:09:00.0: cvl_0_0 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:59.614 Found net devices under 0000:09:00.1: cvl_0_1 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.614 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:29:59.615 00:29:59.615 --- 10.0.0.2 ping statistics --- 00:29:59.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.615 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:29:59.615 00:29:59.615 --- 10.0.0.1 ping statistics --- 00:29:59.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.615 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=1889761 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 1889761 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1889761 ']' 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.615 [2024-05-15 16:51:06.503007] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:29:59.615 [2024-05-15 16:51:06.503081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.615 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.615 [2024-05-15 16:51:06.575840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.615 [2024-05-15 16:51:06.660961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:59.615 [2024-05-15 16:51:06.661029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.615 [2024-05-15 16:51:06.661043] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.615 [2024-05-15 16:51:06.661053] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.615 [2024-05-15 16:51:06.661063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.615 [2024-05-15 16:51:06.661148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.615 [2024-05-15 16:51:06.661212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.615 [2024-05-15 16:51:06.661277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.615 [2024-05-15 16:51:06.661281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.615 [2024-05-15 16:51:06.784775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.615 Malloc1 00:29:59.615 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.616 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:59.616 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.616 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:29:59.913 [2024-05-15 16:51:06.855608] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:59.913 [2024-05-15 16:51:06.855904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.913 
16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:59.913 16:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.913 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:59.913 fio-3.35 00:29:59.913 Starting 1 thread 00:30:00.192 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.717 00:30:02.717 test: (groupid=0, jobs=1): err= 0: pid=1890079: Wed May 15 16:51:09 2024 00:30:02.717 read: IOPS=7967, BW=31.1MiB/s (32.6MB/s)(62.5MiB/2007msec) 00:30:02.717 slat (nsec): min=1949, max=128278, avg=2606.08, stdev=1843.26 00:30:02.717 clat (usec): min=2848, max=14604, avg=8856.23, stdev=704.74 00:30:02.717 lat (usec): min=2871, max=14607, avg=8858.84, stdev=704.67 00:30:02.717 clat percentiles (usec): 00:30:02.717 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8291], 00:30:02.717 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:30:02.717 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[ 9896], 00:30:02.717 | 99.00th=[10421], 99.50th=[10683], 99.90th=[11469], 99.95th=[12649], 00:30:02.717 | 99.99th=[14615] 00:30:02.717 bw ( KiB/s): min=30544, max=32520, per=99.95%, avg=31854.00, stdev=918.86, samples=4 00:30:02.717 iops : min= 7636, max= 8130, avg=7963.50, stdev=229.72, samples=4 00:30:02.717 write: IOPS=7944, BW=31.0MiB/s (32.5MB/s)(62.3MiB/2007msec); 0 zone resets 00:30:02.717 slat (usec): min=2, max=127, avg= 2.74, stdev= 1.74 00:30:02.717 clat (usec): min=1184, max=14518, avg=7188.70, stdev=629.83 00:30:02.717 lat (usec): min=1191, max=14520, avg=7191.44, stdev=629.79 00:30:02.717 clat percentiles (usec): 00:30:02.717 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6718], 00:30:02.717 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:30:02.717 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8160], 00:30:02.717 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[11731], 99.95th=[12780], 00:30:02.717 | 99.99th=[14091] 00:30:02.718 bw ( KiB/s): min=31424, max=32128, per=99.95%, avg=31760.00, stdev=344.71, samples=4 00:30:02.718 iops : min= 7856, max= 8032, avg=7940.00, stdev=86.18, samples=4 00:30:02.718 lat (msec) : 2=0.01%, 4=0.11%, 10=97.61%, 20=2.27% 00:30:02.718 cpu : usr=57.53%, sys=38.14%, ctx=62, majf=0, minf=35 00:30:02.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:02.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:02.718 issued rwts: total=15990,15944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:02.718 00:30:02.718 Run status group 0 (all jobs): 00:30:02.718 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.5MiB (65.5MB), run=2007-2007msec 00:30:02.718 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.3MiB (65.3MB), run=2007-2007msec 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:02.718 16:51:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:02.718 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:02.718 fio-3.35 00:30:02.718 Starting 1 thread 00:30:02.718 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.245 [2024-05-15 16:51:12.106862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf54ac0 is same with the state(5) to be set 00:30:05.245 [2024-05-15 16:51:12.106933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf54ac0 is same with the state(5) to be set 00:30:05.245 00:30:05.245 test: (groupid=0, jobs=1): 
err= 0: pid=1890817: Wed May 15 16:51:12 2024 00:30:05.245 read: IOPS=8283, BW=129MiB/s (136MB/s)(260MiB/2007msec) 00:30:05.245 slat (usec): min=2, max=120, avg= 3.92, stdev= 2.15 00:30:05.245 clat (usec): min=1689, max=16931, avg=8939.18, stdev=2104.83 00:30:05.245 lat (usec): min=1692, max=16934, avg=8943.09, stdev=2104.89 00:30:05.245 clat percentiles (usec): 00:30:05.245 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7111], 00:30:05.245 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9372], 00:30:05.245 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11600], 95.00th=[12256], 00:30:05.245 | 99.00th=[14877], 99.50th=[15795], 99.90th=[16581], 99.95th=[16712], 00:30:05.245 | 99.99th=[16909] 00:30:05.245 bw ( KiB/s): min=59296, max=78944, per=52.37%, avg=69408.00, stdev=8270.09, samples=4 00:30:05.245 iops : min= 3706, max= 4934, avg=4338.00, stdev=516.88, samples=4 00:30:05.245 write: IOPS=4883, BW=76.3MiB/s (80.0MB/s)(141MiB/1851msec); 0 zone resets 00:30:05.245 slat (usec): min=30, max=148, avg=34.83, stdev= 6.25 00:30:05.245 clat (usec): min=5253, max=19129, avg=11179.46, stdev=2011.23 00:30:05.245 lat (usec): min=5287, max=19176, avg=11214.29, stdev=2011.20 00:30:05.245 clat percentiles (usec): 00:30:05.245 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:05.245 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:30:05.245 | 70.00th=[11994], 80.00th=[12649], 90.00th=[14091], 95.00th=[15139], 00:30:05.245 | 99.00th=[16450], 99.50th=[16712], 99.90th=[18220], 99.95th=[18744], 00:30:05.245 | 99.99th=[19006] 00:30:05.245 bw ( KiB/s): min=62720, max=79904, per=91.72%, avg=71672.00, stdev=7582.78, samples=4 00:30:05.245 iops : min= 3920, max= 4994, avg=4479.50, stdev=473.92, samples=4 00:30:05.245 lat (msec) : 2=0.02%, 4=0.13%, 10=54.97%, 20=44.88% 00:30:05.245 cpu : usr=77.02%, sys=20.44%, ctx=32, majf=0, minf=57 00:30:05.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:05.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:05.245 issued rwts: total=16624,9040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.246 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:05.246 00:30:05.246 Run status group 0 (all jobs): 00:30:05.246 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=260MiB (272MB), run=2007-2007msec 00:30:05.246 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=141MiB (148MB), run=1851-1851msec 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- 
# bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.246 16:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.825 Nvme0n1 00:30:07.825 16:51:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.825 16:51:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:07.825 16:51:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.825 16:51:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=91d2da57-593e-40db-8e89-0c8e7aa78a43 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb 91d2da57-593e-40db-8e89-0c8e7aa78a43 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=91d2da57-593e-40db-8e89-0c8e7aa78a43 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:11.100 { 00:30:11.100 "uuid": "91d2da57-593e-40db-8e89-0c8e7aa78a43", 00:30:11.100 "name": "lvs_0", 00:30:11.100 "base_bdev": "Nvme0n1", 00:30:11.100 "total_data_clusters": 930, 00:30:11.100 "free_clusters": 930, 00:30:11.100 "block_size": 512, 00:30:11.100 "cluster_size": 1073741824 00:30:11.100 } 00:30:11.100 ]' 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="91d2da57-593e-40db-8e89-0c8e7aa78a43") .free_clusters' 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="91d2da57-593e-40db-8e89-0c8e7aa78a43") .cluster_size' 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # echo 952320 00:30:11.100 952320 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.100 e970d31c-293c-4803-8f2b-a892546db483 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 
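The 952320 echoed above is get_lvs_free_mb converting the new lvstore's free space to mebibytes: 930 free clusters at a cluster_size of 1073741824 bytes (1 GiB). The same formula reappears further down for the nested store, where 237847 free clusters at 4 MiB give 951388 MiB. As a one-line check of the arithmetic:

# free_mb = free_clusters * cluster_size / 1 MiB (values from the RPC output above)
echo $(( 930 * 1073741824 / 1048576 ))    # -> 952320

That figure is passed straight to bdev_lvol_create, which is why lbd_0 consumes the entire store (lvs_0's free_clusters drops to 0 in the later bdev_lvol_get_lvstores output).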
00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.100 16:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:11.100 16:51:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.100 16:51:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.100 16:51:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.100 16:51:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:11.100 16:51:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.100 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:11.100 fio-3.35 00:30:11.100 Starting 1 thread 00:30:11.100 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.622 [2024-05-15 16:51:20.510694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110cfa0 is same with the state(5) to be set 00:30:13.622 [2024-05-15 16:51:20.510757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110cfa0 is same with the state(5) to be set 00:30:13.622 [2024-05-15 16:51:20.510792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110cfa0 is same with the state(5) to be set 00:30:13.622 00:30:13.622 test: (groupid=0, jobs=1): err= 0: pid=1891944: Wed May 15 16:51:20 2024 00:30:13.622 read: IOPS=5979, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2008msec) 00:30:13.622 slat (usec): min=2, max=129, avg= 2.65, stdev= 1.78 00:30:13.622 clat (usec): min=833, max=171331, avg=11783.27, stdev=11658.97 00:30:13.622 lat (usec): min=836, max=171365, avg=11785.92, stdev=11659.20 00:30:13.622 clat percentiles (msec): 00:30:13.622 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:13.622 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:13.622 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:13.622 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:13.622 | 99.99th=[ 171] 00:30:13.622 bw ( KiB/s): min=16798, max=26432, per=99.72%, avg=23851.50, stdev=4706.02, samples=4 00:30:13.622 iops : min= 4199, max= 6608, avg=5962.75, stdev=1176.75, samples=4 00:30:13.622 write: IOPS=5964, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2008msec); 0 zone resets 00:30:13.622 slat (usec): min=2, max=104, avg= 2.74, stdev= 1.43 00:30:13.622 clat (usec): min=309, max=169247, avg=9489.93, stdev=10944.57 00:30:13.622 lat (usec): min=312, max=169252, avg=9492.67, stdev=10944.81 00:30:13.622 clat percentiles (msec): 00:30:13.622 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:13.622 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:13.622 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:30:13.622 | 99.00th=[ 11], 99.50th=[ 
18], 99.90th=[ 169], 99.95th=[ 169], 00:30:13.622 | 99.99th=[ 169] 00:30:13.622 bw ( KiB/s): min=17796, max=25888, per=99.94%, avg=23843.00, stdev=4031.42, samples=4 00:30:13.622 iops : min= 4449, max= 6472, avg=5960.75, stdev=1007.85, samples=4 00:30:13.622 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:13.622 lat (msec) : 2=0.02%, 4=0.14%, 10=55.09%, 20=44.19%, 250=0.53% 00:30:13.622 cpu : usr=59.49%, sys=37.17%, ctx=104, majf=0, minf=35 00:30:13.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:13.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.622 issued rwts: total=12007,11976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.622 00:30:13.622 Run status group 0 (all jobs): 00:30:13.622 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2008-2008msec 00:30:13.622 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.1MB), run=2008-2008msec 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.622 16:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=f088091e-776f-436e-ae67-688938c37dcd 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb f088091e-776f-436e-ae67-688938c37dcd 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=f088091e-776f-436e-ae67-688938c37dcd 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:14.554 { 00:30:14.554 "uuid": "91d2da57-593e-40db-8e89-0c8e7aa78a43", 00:30:14.554 "name": "lvs_0", 00:30:14.554 "base_bdev": "Nvme0n1", 00:30:14.554 "total_data_clusters": 930, 00:30:14.554 "free_clusters": 0, 00:30:14.554 "block_size": 512, 00:30:14.554 "cluster_size": 1073741824 00:30:14.554 }, 00:30:14.554 { 00:30:14.554 "uuid": "f088091e-776f-436e-ae67-688938c37dcd", 00:30:14.554 "name": 
"lvs_n_0", 00:30:14.554 "base_bdev": "e970d31c-293c-4803-8f2b-a892546db483", 00:30:14.554 "total_data_clusters": 237847, 00:30:14.554 "free_clusters": 237847, 00:30:14.554 "block_size": 512, 00:30:14.554 "cluster_size": 4194304 00:30:14.554 } 00:30:14.554 ]' 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="f088091e-776f-436e-ae67-688938c37dcd") .free_clusters' 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="f088091e-776f-436e-ae67-688938c37dcd") .cluster_size' 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:14.554 951388 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:14.554 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.555 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.811 55f28122-95e6-4918-8142-b823575a8658 00:30:14.811 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.811 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:14.811 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.811 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.811 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.811 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:14.812 16:51:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:14.812 16:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:15.069 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:15.069 fio-3.35 00:30:15.069 Starting 1 thread 00:30:15.069 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.593 00:30:17.593 test: (groupid=0, jobs=1): err= 0: pid=1892428: Wed May 15 16:51:24 2024 00:30:17.593 read: IOPS=5798, BW=22.6MiB/s (23.8MB/s)(45.5MiB/2009msec) 00:30:17.593 slat (usec): min=2, max=157, avg= 2.74, stdev= 2.18 00:30:17.593 clat (usec): min=4363, max=19872, avg=12202.43, stdev=1031.24 00:30:17.593 lat (usec): min=4368, max=19876, avg=12205.17, stdev=1031.12 00:30:17.593 clat percentiles (usec): 00:30:17.593 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:30:17.593 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:30:17.593 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:30:17.593 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17433], 99.95th=[18482], 00:30:17.593 | 99.99th=[18744] 00:30:17.593 bw ( KiB/s): min=21848, max=23784, per=99.81%, avg=23150.00, stdev=881.20, samples=4 00:30:17.593 iops : min= 5462, max= 5946, avg=5787.50, stdev=220.30, samples=4 00:30:17.593 write: IOPS=5780, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec); 0 zone resets 00:30:17.593 slat (usec): min=2, max=101, 
avg= 2.85, stdev= 1.83 00:30:17.593 clat (usec): min=2067, max=17083, avg=9749.35, stdev=886.04 00:30:17.593 lat (usec): min=2073, max=17086, avg=9752.20, stdev=885.97 00:30:17.593 clat percentiles (usec): 00:30:17.593 | 1.00th=[ 7701], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:17.593 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:30:17.593 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:17.593 | 99.00th=[11731], 99.50th=[11994], 99.90th=[15664], 99.95th=[16712], 00:30:17.593 | 99.99th=[16909] 00:30:17.593 bw ( KiB/s): min=22936, max=23344, per=99.99%, avg=23122.00, stdev=175.80, samples=4 00:30:17.593 iops : min= 5734, max= 5836, avg=5780.50, stdev=43.95, samples=4 00:30:17.593 lat (msec) : 4=0.05%, 10=31.90%, 20=68.06% 00:30:17.593 cpu : usr=58.32%, sys=38.40%, ctx=79, majf=0, minf=35 00:30:17.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:17.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.593 issued rwts: total=11649,11614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.593 00:30:17.593 Run status group 0 (all jobs): 00:30:17.593 READ: bw=22.6MiB/s (23.8MB/s), 22.6MiB/s-22.6MiB/s (23.8MB/s-23.8MB/s), io=45.5MiB (47.7MB), run=2009-2009msec 00:30:17.593 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:30:17.593 16:51:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:17.593 16:51:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.593 16:51:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.593 16:51:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.594 16:51:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:30:17.594 16:51:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:17.594 16:51:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.594 16:51:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.772 16:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.296 
16:51:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.296 16:51:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:25.720 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:25.720 rmmod nvme_tcp 00:30:25.721 rmmod nvme_fabrics 00:30:25.721 rmmod nvme_keyring 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1889761 ']' 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1889761 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1889761 ']' 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1889761 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1889761 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1889761' 00:30:25.721 killing process with pid 1889761 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1889761 00:30:25.721 [2024-05-15 16:51:32.643978] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1889761 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.721 16:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.252 16:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.252 00:30:28.252 real 0m31.102s 00:30:28.252 user 1m50.946s 00:30:28.252 sys 0m6.371s 00:30:28.252 16:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:28.252 16:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.252 ************************************ 00:30:28.252 END TEST nvmf_fio_host 00:30:28.252 ************************************ 00:30:28.252 16:51:34 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:28.252 16:51:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:28.252 16:51:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.252 16:51:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.252 ************************************ 00:30:28.252 START TEST nvmf_failover 00:30:28.252 ************************************ 00:30:28.252 16:51:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:28.252 * Looking for test storage... 00:30:28.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.252 16:51:35 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.252 16:51:35 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.252 16:51:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:30.777 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:30.777 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 
-- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:30.777 Found net devices under 0000:09:00.0: cvl_0_0 00:30:30.777 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:30.778 Found net devices under 0000:09:00.1: cvl_0_1 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:30.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:30.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:30:30.778 00:30:30.778 --- 10.0.0.2 ping statistics --- 00:30:30.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.778 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:30.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:30.778 00:30:30.778 --- 10.0.0.1 ping statistics --- 00:30:30.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.778 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1895905 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1895905 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1895905 ']' 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.778 [2024-05-15 16:51:37.671108] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:30:30.778 [2024-05-15 16:51:37.671200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.778 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.778 [2024-05-15 16:51:37.752270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:30.778 [2024-05-15 16:51:37.840506] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.778 [2024-05-15 16:51:37.840568] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.778 [2024-05-15 16:51:37.840594] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.778 [2024-05-15 16:51:37.840608] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.778 [2024-05-15 16:51:37.840620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.778 [2024-05-15 16:51:37.840719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.778 [2024-05-15 16:51:37.840813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.778 [2024-05-15 16:51:37.840816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.778 16:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:31.035 [2024-05-15 16:51:38.255446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.293 16:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:31.551 Malloc0 00:30:31.551 16:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.808 16:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.066 16:51:39 
nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.323 [2024-05-15 16:51:39.392727] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:32.323 [2024-05-15 16:51:39.393065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.323 16:51:39 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:32.580 [2024-05-15 16:51:39.645646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:32.580 16:51:39 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:32.837 [2024-05-15 16:51:39.906499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1896225 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1896225 /var/tmp/bdevperf.sock 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1896225 ']' 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
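
Everything from here to the end of the run is the failover choreography proper. The target already listens on ports 4420, 4421 and 4422 of 10.0.0.2, and bdevperf attaches two paths under the same controller name NVMe0, so removing a listener forces the initiator to fail over to a surviving path while the verify workload keeps running. Condensed from the trace that follows (sleeps omitted; rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py invocation):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the first
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # drop the third

The bursts of tcp.c:1598 recv-state errors right after each remove_listener appear to be the target tearing down the qpairs of the dropped connection rather than a test failure; the test pid is waited on successfully further down.
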
00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:32.837 16:51:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.095 16:51:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:33.095 16:51:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:33.095 16:51:40 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.658 NVMe0n1 00:30:33.658 16:51:40 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.914 00:30:33.915 16:51:40 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1896311 00:30:33.915 16:51:40 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.915 16:51:40 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:34.846 16:51:41 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.114 16:51:42 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:38.396 16:51:45 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.654 00:30:38.654 16:51:45 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:38.914 [2024-05-15 16:51:45.925551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.925714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.914 [2024-05-15 16:51:45.926604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 [2024-05-15 16:51:45.926777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c9e00 is same with the state(5) to be set 00:30:38.915 16:51:45 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:42.197 16:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.197 [2024-05-15 16:51:49.174518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.197 16:51:49 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:43.130 16:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:43.389 [2024-05-15 16:51:50.471397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cb340 is same with the state(5) to be set 00:30:43.389 [2024-05-15 16:51:50.471460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
00:30:43.389 16:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1896311
00:30:50.003 0
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1896225
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1896225 ']'
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1896225
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1896225
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1896225'
00:30:50.003 killing process with pid 1896225
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1896225
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1896225
00:30:50.003 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:50.003 [2024-05-15 16:51:39.968388] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:30:50.003 [2024-05-15 16:51:39.968481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896225 ]
00:30:50.003 EAL: No free 2048 kB hugepages reported on node 1
00:30:50.003 [2024-05-15 16:51:40.039937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:50.003 [2024-05-15 16:51:40.125706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:50.003 Running I/O for 15 seconds...
00:30:50.003 [2024-05-15 16:51:42.236543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.003 [2024-05-15 16:51:42.236607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~127 further in-flight commands (READs lba 79952-80464, WRITEs lba 80488-80864, plus queued commands at lba 80472-80480 and 80872-80960 completed manually while aborting queued I/O) all completed ABORTED - SQ DELETION (00/08) as sqid:1 was torn down, 16:51:42.236634-16:51:42.240925, trimmed ...]
00:30:50.006 [2024-05-15 16:51:42.240991] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ff4db0 was disconnected and freed. reset controller.
00:30:50.007 [2024-05-15 16:51:42.241020] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:50.007 [2024-05-15 16:51:42.241055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:50.007 [2024-05-15 16:51:42.241089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST abort repeated for cid:1-3, 16:51:42.241105-16:51:42.241180, trimmed ...]
00:30:50.007 [2024-05-15 16:51:42.241194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.007 [2024-05-15 16:51:42.241245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd5600 (9): Bad file descriptor
00:30:50.007 [2024-05-15 16:51:42.244576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.007 [2024-05-15 16:51:42.274299] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
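Most of the try.txt dump is per-command abort chatter; the failover itself is told by a handful of bdev_nvme/nvme_ctrlr notices like the ones just above. An illustrative filter for pulling only those milestones out of the dump; the path is taken from the cat command earlier, and the patterns are simply the message names seen in this log:

  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete|disconnected and freed' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt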
00:30:50.007 [2024-05-15 16:51:45.926223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:50.007 [2024-05-15 16:51:45.926267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST abort repeated for cid:2, cid:1 and cid:0, 16:51:45.926287-16:51:45.926361, trimmed ...]
00:30:50.007 [2024-05-15 16:51:45.926374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5600 is same with the state(5) to be set
00:30:50.007 [2024-05-15 16:51:45.928885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.007 [2024-05-15 16:51:45.928911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 24 more READs (lba 75584-75768) aborted the same way, 16:51:45.928936-16:51:45.929688, trimmed ...]
00:30:50.008 [2024-05-15 16:51:45.929703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.008 [2024-05-15 16:51:45.929717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.008 [2024-05-15 16:51:45.929745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.929969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.929985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 
[2024-05-15 16:51:45.930051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.008 [2024-05-15 16:51:45.930350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.008 [2024-05-15 16:51:45.930364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.009 [2024-05-15 16:51:45.930922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.930983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.930998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76104 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 
16:51:45.931338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.009 [2024-05-15 16:51:45.931464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.009 [2024-05-15 16:51:45.931479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.010 [2024-05-15 16:51:45.931929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.931958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.931974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76344 len:8 PRP1 0x0 PRP2 
0x0 00:30:50.010 [2024-05-15 16:51:45.931988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76352 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932331] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76416 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76424 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76432 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76440 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76448 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.010 [2024-05-15 16:51:45.932722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76456 len:8 PRP1 0x0 PRP2 0x0 00:30:50.010 [2024-05-15 16:51:45.932735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.010 [2024-05-15 16:51:45.932749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.010 [2024-05-15 16:51:45.932760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.932771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76464 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.932785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.932798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.932810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.932821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76472 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.932848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.932860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.932871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76480 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.932884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.932897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.932916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.932928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76488 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.932941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.932954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.932965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.932982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76496 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.932995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.933012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.933024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.933035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76504 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.933048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.933062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.933073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.933084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76512 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.933097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.947663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.947704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76520 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.947719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.947732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.947754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76528 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.947767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.947780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.947802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76536 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.947820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 
16:51:45.947832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.947854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76544 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.947866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.947878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.947901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76552 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.947914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.947927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.947949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76560 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.947969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.947983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.947994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.948006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76568 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.948018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.948031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.948042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76576 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.948065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.948078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.011 [2024-05-15 16:51:45.948089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.011 [2024-05-15 16:51:45.948100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76584 len:8 PRP1 0x0 PRP2 0x0 00:30:50.011 [2024-05-15 16:51:45.948113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:45.948125] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:50.011 [2024-05-15 16:51:45.948137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:50.011 [2024-05-15 16:51:45.948148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76592 len:8 PRP1 0x0 PRP2 0x0
00:30:50.011 [2024-05-15 16:51:45.948161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15 16:51:45.948244] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ff6eb0 was disconnected and freed. reset controller.
00:30:50.011 [2024-05-15 16:51:45.948290] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:50.011 [2024-05-15 16:51:45.948305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.011 [2024-05-15 16:51:45.948360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd5600 (9): Bad file descriptor
00:30:50.011 [2024-05-15 16:51:45.951707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.011 [2024-05-15 16:51:45.989308] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:50.011 [2024-05-15 16:51:50.471983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.011 [2024-05-15 16:51:50.472038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15 16:51:50.472064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.011 [2024-05-15 16:51:50.472080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15 16:51:50.472095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.011 [2024-05-15 16:51:50.472109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15 16:51:50.472134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.011 [2024-05-15 16:51:50.472149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15 16:51:50.472164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.011 [2024-05-15 16:51:50.472177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15 16:51:50.472192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.011 [2024-05-15 16:51:50.472230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.011 [2024-05-15
16:51:50.472247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-05-15 16:51:50.472262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:50.472277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-05-15 16:51:50.472291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:50.472305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-05-15 16:51:50.472319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:50.472334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-05-15 16:51:50.472347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.011 [2024-05-15 16:51:50.472362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.012 [2024-05-15 16:51:50.472391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.012 [2024-05-15 16:51:50.472421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.012 [2024-05-15 16:51:50.472450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.012 [2024-05-15 16:51:50.472482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.012 [2024-05-15 16:51:50.472512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.012 [2024-05-15 16:51:50.472560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-05-15 16:51:50.472574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 16:51:50.472605 - 16:51:50.475989: roughly 110 further queued commands on sqid:1 - READs covering lba 127912-128336 (SGL TRANSPORT DATA BLOCK) and WRITEs covering lba 128480-128920 (SGL DATA BLOCK), all len:8 - were printed by nvme_io_qpair_print_command and each completed with the same ABORTED - SQ DELETION (00/08) qid:1 status; the repetitive per-command notices are trimmed here]
00:30:50.014 [2024-05-15 16:51:50.476020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs:
*ERROR*: aborting queued i/o 00:30:50.014 [2024-05-15 16:51:50.476035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.015 [2024-05-15 16:51:50.476047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128928 len:8 PRP1 0x0 PRP2 0x0 00:30:50.015 [2024-05-15 16:51:50.476059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.015 [2024-05-15 16:51:50.476125] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20043a0 was disconnected and freed. reset controller. 00:30:50.015 [2024-05-15 16:51:50.476147] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:50.015 [2024-05-15 16:51:50.476195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.015 [2024-05-15 16:51:50.476226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.015 [2024-05-15 16:51:50.476251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.015 [2024-05-15 16:51:50.476266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.015 [2024-05-15 16:51:50.476280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.015 [2024-05-15 16:51:50.476294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.015 [2024-05-15 16:51:50.476308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.015 [2024-05-15 16:51:50.476321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.015 [2024-05-15 16:51:50.476335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.015 [2024-05-15 16:51:50.479650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.015 [2024-05-15 16:51:50.479689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd5600 (9): Bad file descriptor 00:30:50.015 [2024-05-15 16:51:50.516009] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:50.015
00:30:50.015 Latency(us)
00:30:50.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:50.015 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:50.015 Verification LBA range: start 0x0 length 0x4000
00:30:50.015 NVMe0n1 : 15.01 8443.98 32.98 245.53 0.00 14700.38 843.47 29515.47
00:30:50.015 ===================================================================================================================
00:30:50.015 Total : 8443.98 32.98 245.53 0.00 14700.38 843.47 29515.47
00:30:50.015 Received shutdown signal, test time was about 15.000000 seconds
00:30:50.015
00:30:50.015 Latency(us)
00:30:50.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:50.015 ===================================================================================================================
00:30:50.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1898075 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1898075 /var/tmp/bdevperf.sock 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1898075 ']' 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
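The @65-@67 lines above are the entire pass/fail check for the first phase: count the controller resets that bdevperf logged and require exactly three, one per path that was pulled. As a stand-alone sketch of that logic (the file name try.txt and the expected count of 3 are taken from this run; the harness wraps this in its own helpers):

  # try.txt holds the bdevperf output captured during the 15s run
  count=$(grep -c 'Resetting controller successful' try.txt)
  # three listeners are removed under the initiator, so exactly three
  # failover resets must have succeeded
  if (( count != 3 )); then
      echo "expected 3 successful resets, saw $count" >&2
      exit 1
  fi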
00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:50.015 [2024-05-15 16:51:56.921393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:50.015 16:51:56 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:50.015 [2024-05-15 16:51:57.165982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:50.015 16:51:57 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.273 NVMe0n1 00:30:50.273 16:51:57 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.838 00:30:50.838 16:51:57 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.096 00:30:51.096 16:51:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:51.096 16:51:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:51.354 16:51:58 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.613 16:51:58 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:54.902 16:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:54.902 16:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:54.902 16:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1898744 00:30:54.902 16:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:54.902 16:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1898744 00:30:56.285 0 00:30:56.285 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:56.285 [2024-05-15 16:51:56.426010] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
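The path setup traced at @76-@80 above follows one pattern per port: publish an extra listener on the target, then attach the same controller name to bdevperf once per transport ID. A condensed sketch, assuming rpc.py is invoked from the spdk tree and the addresses and sockets of this run:

  rpc=scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # expose the subsystem on the two additional failover ports
  for port in 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s $port
  done
  # attaching the same bdev name once per path makes bdev_nvme keep the
  # extra trids as failover candidates for NVMe0
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done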
00:30:56.285 [2024-05-15 16:51:56.426088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898075 ] 00:30:56.285 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.285 [2024-05-15 16:51:56.493048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.285 [2024-05-15 16:51:56.571923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.285 [2024-05-15 16:51:58.709182] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:56.285 [2024-05-15 16:51:58.709294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.285 [2024-05-15 16:51:58.709316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.285 [2024-05-15 16:51:58.709348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.285 [2024-05-15 16:51:58.709362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.285 [2024-05-15 16:51:58.709377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.285 [2024-05-15 16:51:58.709392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.285 [2024-05-15 16:51:58.709406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.285 [2024-05-15 16:51:58.709421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.285 [2024-05-15 16:51:58.709435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.285 [2024-05-15 16:51:58.709472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.285 [2024-05-15 16:51:58.709503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x784600 (9): Bad file descriptor 00:30:56.285 [2024-05-15 16:51:58.730274] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:56.285 Running I/O for 1 seconds... 
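The "Running I/O for 1 seconds" pass replayed from try.txt is not a fresh bdevperf invocation: @72 started the process suspended, and @89-@92 trigger the queued job over RPC once the paths are attached. A sketch of that driving pattern (paths relative to the spdk tree, flags copied from this run):

  # start bdevperf suspended (-z) so controllers can be attached over RPC
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # ... add listeners and attach NVMe0 paths as sketched above ...
  # kick off the queued 1s verify job and block until it completes
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  wait $!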
00:30:56.285
00:30:56.285 Latency(us)
00:30:56.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.285 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:56.285 Verification LBA range: start 0x0 length 0x4000
00:30:56.285 NVMe0n1 : 1.01 8536.61 33.35 0.00 0.00 14935.66 3446.71 12718.84
00:30:56.285 ===================================================================================================================
00:30:56.285 Total : 8536.61 33.35 0.00 0.00 14935.66 3446.71 12718.84
00:30:56.285 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:56.285 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:56.285 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.543 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:56.543 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:56.800 16:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.059 16:52:04 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
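Each failover in this phase is forced the same way, as @95-@101 show: confirm NVMe0 is still registered, detach its current path, and give bdev_nvme a moment to reset onto a surviving trid. Reusing $rpc and $sock from the earlier sketch:

  # the controller must still exist before a path is pulled
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  # drop the active trid; bdev_nvme fails over to a remaining listener
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
      -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3  # allow the failover reset to finish before the next check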
16:52:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.609 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:00.609 rmmod nvme_tcp 00:31:00.609 rmmod nvme_fabrics 00:31:00.868 rmmod nvme_keyring 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1895905 ']' 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1895905 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1895905 ']' 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1895905 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1895905 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1895905' 00:31:00.868 killing process with pid 1895905 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1895905 00:31:00.868 [2024-05-15 16:52:07.879229] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:00.868 16:52:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1895905 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.128 16:52:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.038 16:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:03.038 00:31:03.038 real 0m35.203s 00:31:03.038 user 
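Teardown mirrors setup, as the tail of the trace shows: delete the subsystem, unload the kernel initiator modules, then kill and reap the target process. A compressed sketch (module removal needs root; $nvmfpid stands in for the target pid recorded at startup, 1895905 in this run):

  # remove the subsystem from the target, then unload initiator modules
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp
  sudo modprobe -v -r nvme-fabrics
  # stop the nvmf target and wait for it to exit
  kill "$nvmfpid" && wait "$nvmfpid"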
2m2.637s 00:31:03.038 sys 0m6.007s 00:31:03.038 16:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:03.038 16:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:03.038 ************************************ 00:31:03.038 END TEST nvmf_failover 00:31:03.038 ************************************ 00:31:03.038 16:52:10 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:03.038 16:52:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:03.038 16:52:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:03.038 16:52:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:03.038 ************************************ 00:31:03.038 START TEST nvmf_host_discovery 00:31:03.038 ************************************ 00:31:03.038 16:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:03.297 * Looking for test storage... 00:31:03.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.297 16:52:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:03.298 16:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
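The gather_supported_nvmf_pci_devs steps traced below boil down to the following simplified bash sketch (illustrative only -- the array names mirror the trace, but this is not SPDK's actual nvmf/common.sh): classify NICs by PCI vendor/device ID and collect their kernel net device names from sysfs.

    #!/usr/bin/env bash
    # Simplified, illustrative sketch of the device scan traced below.
    # NOT SPDK's real nvmf/common.sh -- IDs and paths are taken from the trace.
    intel=0x8086                      # PCI vendor ID for Intel
    e810=(0x1592 0x159b)              # E810 device IDs (as in the trace)
    x722=(0x37d2)                     # X722 device ID
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}" "${x722[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${pci##*/} ($vendor - $device)"
            for dev in "$pci"/net/*; do       # kernel net devices, e.g. cvl_0_0
                [[ -e $dev ]] && net_devs+=("${dev##*/}")
            done
        done
    done
    echo "Found net devices: ${net_devs[*]}"

On the machine in this log such a scan would report the two 0x8086:0x159b E810 ports at 0000:09:00.0 and 0000:09:00.1 together with their net devices cvl_0_0 and cvl_0_1, matching the "Found ..." lines in the trace that follows.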
00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:05.845 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:05.845 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:05.845 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:05.846 Found net devices under 0000:09:00.0: cvl_0_0 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:05.846 Found net devices under 0000:09:00.1: cvl_0_1 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.846 16:52:12 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:05.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:31:05.846 00:31:05.846 --- 10.0.0.2 ping statistics --- 00:31:05.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.846 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:31:05.846 00:31:05.846 --- 10.0.0.1 ping statistics --- 00:31:05.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.846 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1901751 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1901751 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1901751 ']' 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:05.846 16:52:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.846 [2024-05-15 16:52:12.967754] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:31:05.846 [2024-05-15 16:52:12.967851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.846 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.846 [2024-05-15 16:52:13.042590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.104 [2024-05-15 16:52:13.135624] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:06.104 [2024-05-15 16:52:13.135681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.104 [2024-05-15 16:52:13.135704] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.104 [2024-05-15 16:52:13.135716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.104 [2024-05-15 16:52:13.135740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.104 [2024-05-15 16:52:13.135776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.104 [2024-05-15 16:52:13.280927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.104 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.104 [2024-05-15 16:52:13.288887] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:06.105 [2024-05-15 16:52:13.289153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.105 null0 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.105 null1 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1901781 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1901781 /tmp/host.sock 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1901781 ']' 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:06.105 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:06.105 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.362 [2024-05-15 16:52:13.361528] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:31:06.362 [2024-05-15 16:52:13.361632] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901781 ] 00:31:06.362 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.362 [2024-05-15 16:52:13.428210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.362 [2024-05-15 16:52:13.512988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.620 16:52:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.620 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.621 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 [2024-05-15 16:52:13.918778] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:06.879 
16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.879 16:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:06.879 16:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:07.445 [2024-05-15 16:52:14.648430] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:07.445 [2024-05-15 16:52:14.648459] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:07.445 [2024-05-15 16:52:14.648479] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:07.702 [2024-05-15 16:52:14.734807] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:07.702 [2024-05-15 16:52:14.838857] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:07.702 [2024-05-15 16:52:14.838883] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.960 16:52:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.960 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.219 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.220 16:52:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.514 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.514 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.514 16:52:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 16:52:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 [2024-05-15 16:52:16.554976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:09.445 [2024-05-15 16:52:16.555841] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:09.445 [2024-05-15 16:52:16.555886] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 
max=10 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.445 [2024-05-15 16:52:16.644106] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:09.445 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.702 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:09.702 16:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:09.702 [2024-05-15 16:52:16.704680] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.702 [2024-05-15 16:52:16.704706] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:09.702 [2024-05-15 16:52:16.704717] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:10.631 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.631 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:10.631 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:10.631 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.632 [2024-05-15 16:52:17.795261] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:10.632 [2024-05-15 16:52:17.795312] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.632 [2024-05-15 16:52:17.798809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.632 [2024-05-15 16:52:17.798849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.632 [2024-05-15 16:52:17.798868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.632 [2024-05-15 16:52:17.798885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.632 [2024-05-15 16:52:17.798900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.632 [2024-05-15 16:52:17.798922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.632 [2024-05-15 16:52:17.798939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.632 [2024-05-15 16:52:17.798955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.632 [2024-05-15 16:52:17.798980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.632 16:52:17 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.632 [2024-05-15 16:52:17.808815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.632 [2024-05-15 16:52:17.818862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.632 [2024-05-15 16:52:17.819136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.819335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.819363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.632 [2024-05-15 16:52:17.819381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.632 [2024-05-15 16:52:17.819403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.632 [2024-05-15 16:52:17.819438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.632 [2024-05-15 16:52:17.819456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.632 [2024-05-15 16:52:17.819472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.632 [2024-05-15 16:52:17.819492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
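The per-condition polling that drives every check in this run comes from the waitforcondition helper traced at autotest_common.sh@910-916 above. A minimal sketch reconstructed from those trace lines; the timeout return path and exact quoting are assumptions, not the verbatim helper:

waitforcondition() {
	local cond=$1   # condition string, re-eval'ed each pass (sh@910)
	local max=10    # at most ~10 one-second polls (sh@911)
	while (( max-- )); do                # sh@912
		eval "$cond" && return 0     # condition met (sh@913-914)
		sleep 1                      # sh@916
	done
	return 1  # assumed: report failure once max is exhausted
}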
00:31:10.632 [2024-05-15 16:52:17.828947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.632 [2024-05-15 16:52:17.829135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.829290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.829317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.632 [2024-05-15 16:52:17.829334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.632 [2024-05-15 16:52:17.829355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.632 [2024-05-15 16:52:17.829376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.632 [2024-05-15 16:52:17.829390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.632 [2024-05-15 16:52:17.829404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.632 [2024-05-15 16:52:17.829422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.632 [2024-05-15 16:52:17.839024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.632 [2024-05-15 16:52:17.839230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.839371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.839397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.632 [2024-05-15 16:52:17.839423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.632 [2024-05-15 16:52:17.839446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.632 [2024-05-15 16:52:17.839467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.632 [2024-05-15 16:52:17.839481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.632 [2024-05-15 16:52:17.839510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.632 [2024-05-15 16:52:17.839532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
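The trsvcid comparisons in this test ([[ 4420 4421 == ... ]] above, and [[ 4421 == ... ]] after the 4420 listener is removed) read paths back through get_subsystem_paths, whose pipeline appears verbatim in the host/discovery.sh@63 trace. A sketch assembled from that trace (the function wrapper and quoting are assumptions):

get_subsystem_paths() {
	# list every connected path's service id (port) for controller $1
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' \
		| sort -n | xargs
}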
00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.632 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.632 [2024-05-15 16:52:17.849103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.632 [2024-05-15 16:52:17.849289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.849447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.632 [2024-05-15 16:52:17.849474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.632 [2024-05-15 16:52:17.849491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.632 [2024-05-15 16:52:17.849514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.633 [2024-05-15 16:52:17.850341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.633 [2024-05-15 16:52:17.850366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.633 [2024-05-15 16:52:17.850380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.633 [2024-05-15 16:52:17.850412] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
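The subsystem-name and bdev-list conditions checked while the reconnect attempts above churn are read back through two more discovery.sh helpers, traced at @59 and @55. Sketches from those trace lines (the wrappers themselves are assumed):

get_subsystem_names() {
	# controller names as one sorted, space-separated line (discovery.sh@59)
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
	# bdev names, e.g. "nvme0n1 nvme0n2" (discovery.sh@55)
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}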
00:31:10.890 [2024-05-15 16:52:17.859179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.890 [2024-05-15 16:52:17.859410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-05-15 16:52:17.859554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-05-15 16:52:17.859585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-05-15 16:52:17.859604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.890 [2024-05-15 16:52:17.859636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.890 [2024-05-15 16:52:17.859675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.890 [2024-05-15 16:52:17.859696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.890 [2024-05-15 16:52:17.859711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.890 [2024-05-15 16:52:17.859733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-05-15 16:52:17.869287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.890 [2024-05-15 16:52:17.869467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-05-15 16:52:17.869668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-05-15 16:52:17.869698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-05-15 16:52:17.869717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.890 [2024-05-15 16:52:17.869742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.890 [2024-05-15 16:52:17.869792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.890 [2024-05-15 16:52:17.869814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.890 [2024-05-15 16:52:17.869830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.890 [2024-05-15 16:52:17.869851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
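The is_notification_count_eq checks bracketing the listener removal rely on get_notification_count (discovery.sh@74-75). The trace shows the RPC call and the notify_id bookkeeping (notify_id=2 before the removal, 4 after the two removal notifications); the increment rule below is inferred from those values, not read from the script:

get_notification_count() {
	# count notifications newer than the current cursor $notify_id
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
		| jq '. | length')
	notify_id=$((notify_id + notification_count))  # assumed: advance the cursor past them
}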
00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.890 [2024-05-15 16:52:17.879361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.890 [2024-05-15 16:52:17.879536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-05-15 16:52:17.879724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-05-15 16:52:17.879765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60bc60 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-05-15 16:52:17.879781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bc60 is same with the state(5) to be set 00:31:10.890 [2024-05-15 16:52:17.879803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60bc60 (9): Bad file descriptor 00:31:10.890 [2024-05-15 16:52:17.879849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.890 [2024-05-15 16:52:17.879869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.890 [2024-05-15 16:52:17.879882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.890 [2024-05-15 16:52:17.879916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-05-15 16:52:17.881841] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:10.890 [2024-05-15 16:52:17.881872] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.890 16:52:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.890 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.891 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.147 16:52:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.080 [2024-05-15 16:52:19.191062] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:12.080 [2024-05-15 16:52:19.191102] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:12.080 [2024-05-15 16:52:19.191127] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:12.080 [2024-05-15 16:52:19.278389] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:12.339 [2024-05-15 16:52:19.344809] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:12.339 [2024-05-15 16:52:19.344860] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.339 request: 00:31:12.339 { 00:31:12.339 "name": "nvme", 00:31:12.339 "trtype": "tcp", 00:31:12.339 "traddr": "10.0.0.2", 00:31:12.339 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:12.339 "adrfam": "ipv4", 00:31:12.339 "trsvcid": "8009", 00:31:12.339 "wait_for_attach": true, 00:31:12.339 "method": "bdev_nvme_start_discovery", 00:31:12.339 "req_id": 1 00:31:12.339 } 00:31:12.339 Got JSON-RPC error response 00:31:12.339 response: 00:31:12.339 { 00:31:12.339 "code": -17, 00:31:12.339 "message": "File exists" 00:31:12.339 } 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.339 request: 00:31:12.339 { 00:31:12.339 "name": "nvme_second", 00:31:12.339 "trtype": "tcp", 00:31:12.339 "traddr": "10.0.0.2", 00:31:12.339 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:12.339 "adrfam": "ipv4", 00:31:12.339 "trsvcid": "8009", 00:31:12.339 "wait_for_attach": true, 00:31:12.339 "method": "bdev_nvme_start_discovery", 00:31:12.339 "req_id": 1 00:31:12.339 } 00:31:12.339 Got JSON-RPC error response 00:31:12.339 response: 00:31:12.339 { 00:31:12.339 "code": -17, 00:31:12.339 "message": "File exists" 00:31:12.339 } 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.339 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.340 
16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.340 16:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.714 [2024-05-15 16:52:20.556478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.714 [2024-05-15 16:52:20.556753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.714 [2024-05-15 16:52:20.556781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60a0b0 with addr=10.0.0.2, port=8010 00:31:13.714 [2024-05-15 16:52:20.556812] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:13.714 [2024-05-15 16:52:20.556828] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:13.714 [2024-05-15 16:52:20.556842] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:14.647 [2024-05-15 16:52:21.558840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 16:52:21.559061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 16:52:21.559092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60a0b0 with addr=10.0.0.2, port=8010 00:31:14.647 [2024-05-15 16:52:21.559120] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:14.647 [2024-05-15 16:52:21.559136] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:14.647 [2024-05-15 16:52:21.559150] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:15.580 [2024-05-15 16:52:22.561052] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:15.580 request: 00:31:15.580 { 00:31:15.580 "name": "nvme_second", 00:31:15.580 "trtype": "tcp", 00:31:15.580 "traddr": "10.0.0.2", 00:31:15.580 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:15.580 
"adrfam": "ipv4", 00:31:15.580 "trsvcid": "8010", 00:31:15.580 "attach_timeout_ms": 3000, 00:31:15.580 "method": "bdev_nvme_start_discovery", 00:31:15.580 "req_id": 1 00:31:15.580 } 00:31:15.580 Got JSON-RPC error response 00:31:15.580 response: 00:31:15.580 { 00:31:15.580 "code": -110, 00:31:15.580 "message": "Connection timed out" 00:31:15.580 } 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1901781 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:15.580 rmmod nvme_tcp 00:31:15.580 rmmod nvme_fabrics 00:31:15.580 rmmod nvme_keyring 00:31:15.580 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1901751 ']' 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1901751 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1901751 ']' 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1901751 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1901751 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1901751' 00:31:15.581 killing process with pid 1901751 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1901751 00:31:15.581 [2024-05-15 16:52:22.703604] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:15.581 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1901751 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.839 16:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.365 16:52:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:18.365 00:31:18.365 real 0m14.776s 00:31:18.365 user 0m21.339s 00:31:18.365 sys 0m3.282s 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.365 ************************************ 00:31:18.365 END TEST nvmf_host_discovery 00:31:18.365 ************************************ 00:31:18.365 16:52:25 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:18.365 16:52:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:18.365 16:52:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:18.365 16:52:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:18.365 ************************************ 00:31:18.365 START TEST nvmf_host_multipath_status 00:31:18.365 ************************************ 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:18.365 * Looking for test storage... 
00:31:18.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:18.365 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:18.366 16:52:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:18.366 16:52:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:20.264 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:20.264 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.264 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
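gather_supported_nvmf_pci_devs, traced above, matches known Intel (e810/x722) and Mellanox (mlx5) vendor:device IDs against the cached PCI bus and reports each hit; here both E810 functions qualify (0x8086 - 0x159b at 0000:09:00.0 and 0000:09:00.1). A condensed sketch of the same match-and-report idea, assuming a plain lspci scan instead of nvmf/common.sh's internal pci_bus_cache:

# Sketch under assumptions: uses lspci -Dnm rather than SPDK's cached PCI scan.
intel=8086
e810=(1592 159b)                          # E810 device IDs checked in the trace
while read -r addr _class vendor device _; do
    vendor=${vendor//\"/} device=${device//\"/}   # lspci -m quotes its fields
    for id in "${e810[@]}"; do
        [[ $vendor == "$intel" && $device == "$id" ]] &&
            echo "Found $addr (0x$vendor - 0x$device)"
    done
done < <(lspci -Dnm)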
00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:20.265 Found net devices under 0000:09:00.0: cvl_0_0 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:20.265 Found net devices under 0000:09:00.1: cvl_0_1 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:20.265 16:52:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.265 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:20.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:31:20.523 00:31:20.523 --- 10.0.0.2 ping statistics --- 00:31:20.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.523 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:31:20.523 00:31:20.523 --- 10.0.0.1 ping statistics --- 00:31:20.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.523 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1905350 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1905350 00:31:20.523 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1905350 ']' 00:31:20.524 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.524 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:20.524 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.524 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:20.524 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.524 [2024-05-15 16:52:27.697349] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
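Condensing nvmf_tcp_init from the trace above: the first E810 port (cvl_0_0) becomes the target interface inside a private network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and one ping in each direction proves the 10.0.0.0/24 link before nvmf_tgt (pid 1905350) is launched inside that namespace. The sequence, as logged:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> initiator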
00:31:20.524 [2024-05-15 16:52:27.697429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.524 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.782 [2024-05-15 16:52:27.772417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:20.782 [2024-05-15 16:52:27.858422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.782 [2024-05-15 16:52:27.858484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.782 [2024-05-15 16:52:27.858512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.782 [2024-05-15 16:52:27.858523] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.782 [2024-05-15 16:52:27.858533] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.782 [2024-05-15 16:52:27.858618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.782 [2024-05-15 16:52:27.858623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.782 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:20.782 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:20.782 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:20.782 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.782 16:52:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.782 16:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.782 16:52:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1905350 00:31:20.782 16:52:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.348 [2024-05-15 16:52:28.274438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.348 16:52:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:21.348 Malloc0 00:31:21.348 16:52:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:21.605 16:52:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.171 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.171 [2024-05-15 16:52:29.329781] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:31:22.171 [2024-05-15 16:52:29.330111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.171 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:22.428 [2024-05-15 16:52:29.574707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1905518 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1905518 /var/tmp/bdevperf.sock 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1905518 ']' 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:22.428 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:22.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
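With the target up, the RPCs above assemble one ANA-reporting subsystem (-r) that listens on two ports of the same address, then start bdevperf (pid 1905518) as the initiator-side I/O generator; the two bdev_nvme_attach_controller calls that follow below join ports 4420 and 4421 under a single Nvme0 bdev via -x multipath. Target-side setup, condensed from the trace with rpc standing in for the full scripts/rpc.py path:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421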
00:31:22.429 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:22.429 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:22.686 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:22.686 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:22.686 16:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:22.943 16:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:23.508 Nvme0n1 00:31:23.508 16:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:24.073 Nvme0n1 00:31:24.073 16:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:24.073 16:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:25.968 16:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:25.968 16:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:26.225 16:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:26.483 16:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:27.445 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:27.445 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:27.445 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.445 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.702 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.702 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:27.702 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.702 16:52:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:27.960 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.960 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:27.960 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.960 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.217 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.217 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.217 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.217 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.475 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.475 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.475 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.475 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.733 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.733 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.733 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.733 16:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.993 16:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.993 16:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:28.993 16:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:29.251 16:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:29.509 16:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:30.442 16:52:37 
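Each verification round above has the same shape: port_status asks bdevperf for its view of the I/O paths and extracts one flag for one listener port with jq, and check_status chains six such probes. As traced (socket and jq filter verbatim from the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
# check_status <4420.current> <4421.current> <4420.connected> <4421.connected>
#              <4420.accessible> <4421.accessible>
# -- the same query repeated for .current, .connected and .accessible per port.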
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:30.442 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:30.442 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.442 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.700 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:30.700 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.700 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.700 16:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.958 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.958 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.958 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.958 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.215 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.215 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.215 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.215 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.473 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.473 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.473 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.473 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.730 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.730 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.730 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.730 16:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.988 16:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.988 16:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:31.988 16:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:32.245 16:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:32.503 16:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:33.436 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:33.436 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.436 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.436 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.693 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.693 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:33.693 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.693 16:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.951 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.951 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.951 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.951 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.208 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.208 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.208 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.208 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.466 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.466 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.466 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.466 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.724 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.724 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:34.724 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.724 16:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.981 16:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.981 16:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:34.981 16:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:35.239 16:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:35.496 16:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.867 16:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.125 16:52:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.125 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.125 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.125 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:37.383 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.383 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:37.383 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.383 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:37.640 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.640 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:37.640 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.640 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:37.897 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.897 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:37.897 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.897 16:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:38.154 16:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.154 16:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:38.154 16:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:38.411 16:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:38.411 16:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:39.781 16:52:46 
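Read together, the active_passive rounds so far map ANA states straight onto the asserted flags: an inaccessible listener keeps its TCP connection (connected stays true) but stops being an eligible path, and at most one usable path is marked current.

# set_ANA_state 4420/4421        -> expected current, connected, accessible (4420/4421)
# optimized / optimized          -> true/false   true/true   true/true
# non_optimized / optimized      -> false/true   true/true   true/true
# non_optimized / non_optimized  -> true/false   true/true   true/true
# non_optimized / inaccessible   -> true/false   true/true   true/false
# inaccessible / inaccessible    -> false/false  true/true   false/false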
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.781 16:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.038 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.038 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.038 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.038 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:40.295 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.295 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:40.295 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.295 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:40.553 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.553 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:40.553 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.553 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:40.811 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.811 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:40.811 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.811 16:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.069 16:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.069 16:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:41.069 16:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:41.326 16:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:41.584 16:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:42.515 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:42.515 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:42.515 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.515 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:42.773 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.773 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:42.773 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.773 16:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.068 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.068 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.068 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.068 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:43.326 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.326 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:43.326 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.326 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:43.583 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.583 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:43.583 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.583 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:43.841 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.841 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:43.841 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.841 16:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.099 16:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.099 16:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:44.357 16:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:44.357 16:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:44.614 16:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:44.871 16:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:45.805 16:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:45.805 16:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:45.805 16:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.805 16:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.063 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.063 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:46.063 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.063 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:46.320 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.320 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.321 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.321 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:46.577 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.577 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:46.577 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.578 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.834 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.834 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:46.834 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.834 16:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.092 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.092 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:47.092 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.092 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:47.349 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.349 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:47.349 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:47.607 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:47.865 16:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:48.796 16:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:48.796 16:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:48.796 16:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.796 16:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.054 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.054 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:49.054 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.054 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.312 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.312 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.312 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.312 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:49.570 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.570 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:49.570 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.570 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:49.828 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.828 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:49.828 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.828 16:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.086 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.086 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:50.086 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.086 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.343 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.343 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:50.343 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:50.600 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:50.857 16:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:51.792 16:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:51.792 16:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.792 16:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.792 16:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.050 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.050 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:52.050 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.050 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.308 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.308 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.308 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.308 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.566 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.566 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.566 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.566 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.824 16:52:59 
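After bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active (traced above), the expectations change: every path in the best reachable ANA group is current at once, instead of a single elected path. Summarizing the four active_active rounds of this phase (the last one follows below):

# set_ANA_state 4420/4421        -> expected current (4420/4421) under active_active
# optimized / optimized          -> true/true    (I/O spread across both ports)
# non_optimized / optimized      -> false/true   (optimized group outranks non_optimized)
# non_optimized / non_optimized  -> true/true    (both sit in the best available group)
# non_optimized / inaccessible   -> true/false   (inaccessible path drops out entirely)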
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.824 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.824 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.824 16:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.081 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.081 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.081 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.081 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.339 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.340 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:53.340 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:53.597 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:53.855 16:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:54.803 16:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:54.803 16:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:54.803 16:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.803 16:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.061 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.061 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:55.061 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.061 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.318 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.318 16:53:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.318 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.318 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.576 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.576 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.576 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.576 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.833 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.833 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.833 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.834 16:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:56.091 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.091 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:56.091 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.091 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1905518 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1905518 ']' 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1905518 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1905518 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
1905518' 00:31:56.348 killing process with pid 1905518 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1905518 00:31:56.348 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1905518 00:31:56.609 Connection closed with partial response: 00:31:56.609 00:31:56.609 00:31:56.609 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1905518 00:31:56.609 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:56.609 [2024-05-15 16:52:29.634810] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:31:56.609 [2024-05-15 16:52:29.634889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905518 ] 00:31:56.609 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.609 [2024-05-15 16:52:29.703329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.609 [2024-05-15 16:52:29.784455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.609 Running I/O for 90 seconds... 00:31:56.609 [2024-05-15 16:52:45.379657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.609 [2024-05-15 16:52:45.379724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.379805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.379827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.379852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.379869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.379891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.379908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.379930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.379946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.379968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.379986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380009] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.380956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.609 [2024-05-15 16:52:45.381807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.609 [2024-05-15 16:52:45.381831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.381849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.381874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.381892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.381916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.381934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.381958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.381977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
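
The print_command/print_completion pairs above are bdevperf's queued writes on qid:1 (len:8 blocks, 0x1000 bytes each), every one completing with the NVMe path status ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, path-related; status code 02h) after a listener's ANA state was flipped to inaccessible, which lets the bdev_nvme layer fail the I/O over to the remaining path. To summarize a dump like this offline, tallying the completion statuses out of the try.txt trace (the file cat'ed at multipath_status.sh@141 above) is usually enough; the one-liner below is an editorial sketch against that path, not part of the test scripts:

  # Tally completion statuses in the dumped bdevperf trace (path as cat'ed above).
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9/]*)' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
    sort | uniq -c | sort -rn
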
00:31:56.610 [2024-05-15 16:52:45.382234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.382981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.382998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.610 
[2024-05-15 16:52:45.383568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.383974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.383995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 
cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.610 [2024-05-15 16:52:45.384754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.610 [2024-05-15 16:52:45.384799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.384971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.384997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.610 [2024-05-15 16:52:45.385805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.610 [2024-05-15 16:52:45.385831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.385848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.385874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.385890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.385921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.385938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.385964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.385981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.386006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.386024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.386049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.386067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.386097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.386114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.386140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.386157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:52:45.386183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:52:45.386233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.968968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.969044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.969124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.969147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.969852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.969891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.969918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.969946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.969967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.969983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.970470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
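
The 16:53:00 burst above follows the final set_ANA_state pair in this run (multipath_status.sh@133: 4420 non_optimized, 4421 inaccessible), and the check_status assertions that gate the test all funnel through the port_status helper traced at multipath_status.sh@64: it reads bdev_nvme_get_io_paths over the bdevperf RPC socket and compares one jq-extracted field per listener port. Reconstructed from the xtrace, the helper amounts to the sketch below; the paths and socket are the ones used in this workspace, and this is a reconstruction, not the script's verbatim source:

  # port_status <trsvcid> <field> <expected>, as reconstructed from the xtrace above.
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                   -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
               jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ $actual == "$expected" ]]
  }
  # The run's last assertion (16:53:03): with 4421 ANA-inaccessible, its path must
  # report accessible == false while the 4420 path stays current/connected/accessible.
  port_status 4421 accessible false
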
00:31:56.611 [2024-05-15 16:53:00.970492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.970974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.970995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971900] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.971963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.971980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.611 [2024-05-15 16:53:00.972611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.611 [2024-05-15 16:53:00.972735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.611 [2024-05-15 16:53:00.972758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
00:31:56.611 Received shutdown signal, test time was about 32.234473 seconds
00:31:56.611
00:31:56.611                                         Latency(us)
00:31:56.611 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s    TO/s    Average       min          max
00:31:56.611 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:56.611 Verification LBA range: start 0x0 length 0x4000
00:31:56.611 Nvme0n1                     :      32.23   7938.56    31.01     0.00    0.00   16096.82    916.29   4026531.84
00:31:56.611 ===================================================================================================================
00:31:56.611 Total                       :              7938.56    31.01     0.00    0.00   16096.82    916.29   4026531.84
00:31:56.611 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:56.869 16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
16:53:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1905350 ']'
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1905350
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1905350 ']'
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1905350
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1905350
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
16:53:04
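A quick consistency check on the summary table above (editorial arithmetic, not part of the harness output): with 4 KiB I/Os the MiB/s column follows directly from the IOPS column, and the average latency is consistent with the configured queue depth of 128.

    # 7938.56 IOPS * 4096 B / 1048576 B/MiB = 31.01 MiB/s   (matches the MiB/s column)
    # 128 (queue depth) / 16096.82 us avg   = ~7952 IOPS    (close to the measured 7938.56)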
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1905350' 00:31:56.869 killing process with pid 1905350 00:31:56.869 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1905350 00:31:56.869 [2024-05-15 16:53:04.094592] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:56.869 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1905350 00:31:57.126 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:57.126 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.126 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.126 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.126 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.126 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.127 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.127 16:53:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.690 16:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:59.690 00:31:59.690 real 0m41.330s 00:31:59.690 user 2m3.863s 00:31:59.690 sys 0m10.399s 00:31:59.690 16:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:59.690 16:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:59.690 ************************************ 00:31:59.690 END TEST nvmf_host_multipath_status 00:31:59.690 ************************************ 00:31:59.690 16:53:06 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:59.690 16:53:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:59.690 16:53:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:59.690 16:53:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.690 ************************************ 00:31:59.690 START TEST nvmf_discovery_remove_ifc 00:31:59.690 ************************************ 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:59.690 * Looking for test storage... 
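Every test in this job follows the same wrapper pattern: run_test launches the script, and nvmftestfini tears the target down on exit. The teardown just traced reduces to a few commands; a condensed sketch (the PID and paths are from this run, and the compact form is illustrative rather than the literal script):

    # Teardown sketch: drop the subsystem, unload initiator modules, stop the target
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 1905350 && wait 1905350   # killprocess: stop nvmf_tgt once its comm is confirmed not to be sudo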
00:31:59.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:59.690 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:59.691 16:53:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:02.219 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:02.219 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.219 16:53:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:02.219 Found net devices under 0000:09:00.0: cvl_0_0 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:02.219 Found net devices under 0000:09:00.1: cvl_0_1 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.219 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:02.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:32:02.219 00:32:02.219 --- 10.0.0.2 ping statistics --- 00:32:02.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.220 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:02.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:02.220 00:32:02.220 --- 10.0.0.1 ping statistics --- 00:32:02.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.220 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1912013 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1912013 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1912013 ']' 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:02.220 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 [2024-05-15 16:53:09.237415] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
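Before the target app comes up, nvmftestinit has built the two-port topology traced above: the first ice port (cvl_0_0) is moved into a private network namespace as the target side, while the second (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of that setup, using the device names and addresses from this run:

    # Target side: cvl_0_0 lives in cvl_0_0_ns_spdk as 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2             # reachability checks in both directions, as traced above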
00:32:02.220 [2024-05-15 16:53:09.237505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.220 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.220 [2024-05-15 16:53:09.316365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.220 [2024-05-15 16:53:09.403804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.220 [2024-05-15 16:53:09.403864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.220 [2024-05-15 16:53:09.403888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.220 [2024-05-15 16:53:09.403903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.220 [2024-05-15 16:53:09.403915] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.220 [2024-05-15 16:53:09.403958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.478 [2024-05-15 16:53:09.558028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.478 [2024-05-15 16:53:09.565986] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:02.478 [2024-05-15 16:53:09.566301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:02.478 null0 00:32:02.478 [2024-05-15 16:53:09.598170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1912148 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1912148 /tmp/host.sock 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1912148 ']' 00:32:02.478 16:53:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:02.478 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:02.478 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.478 [2024-05-15 16:53:09.662270] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:32:02.478 [2024-05-15 16:53:09.662340] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912148 ] 00:32:02.478 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.735 [2024-05-15 16:53:09.733619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.735 [2024-05-15 16:53:09.821396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.735 16:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.993 16:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.993 16:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:02.993 16:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.993 16:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.929 [2024-05-15 16:53:11.077405] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:03.929 [2024-05-15 16:53:11.077439] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:03.929 [2024-05-15 
16:53:11.077460] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:04.186 [2024-05-15 16:53:11.164775] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:04.186 [2024-05-15 16:53:11.387943] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:04.186 [2024-05-15 16:53:11.388012] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:04.186 [2024-05-15 16:53:11.388051] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:04.186 [2024-05-15 16:53:11.388073] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:04.186 [2024-05-15 16:53:11.388107] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.186 [2024-05-15 16:53:11.394642] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x182b7c0 was disconnected and freed. delete nvme_qpair. 
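The discovery attach just logged is driven entirely over the host app's RPC socket; a minimal sketch of the three calls (flags copied from the trace, rpc_cmd being the harness wrapper around scripts/rpc.py):

    # The host app was launched as: nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1      # pre-init bdev_nvme options, as in the trace
    rpc_cmd -s /tmp/host.sock framework_start_init            # finish the startup deferred by --wait-for-rpc
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach        # read the log page on 8009, attach nvme0 via 4420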
00:32:04.186 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.443 16:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.375 16:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:06.748 16:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.682 16:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.615 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.615 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.616 16:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
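The block repeating above once per second is the harness polling for the expected bdev set. A minimal sketch of that wait loop (helper names match the trace at discovery_remove_ifc.sh@29 and @33; the loop body is a paraphrase, not the literal script):

    # get_bdev_list: sorted, space-joined bdev names reported by the host app
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # wait_for_bdev nvme0n1 spins until the list matches; later, wait_for_bdev ''
    # waits for the bdev to disappear after the target interface is taken down
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done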
00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.550 16:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.808 [2024-05-15 16:53:16.829169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:09.808 [2024-05-15 16:53:16.829266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.808 [2024-05-15 16:53:16.829288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.808 [2024-05-15 16:53:16.829304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.808 [2024-05-15 16:53:16.829317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.808 [2024-05-15 16:53:16.829330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.808 [2024-05-15 16:53:16.829343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.808 [2024-05-15 16:53:16.829357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.808 [2024-05-15 16:53:16.829369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.808 [2024-05-15 16:53:16.829383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.808 [2024-05-15 16:53:16.829397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.808 [2024-05-15 16:53:16.829410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2850 is same with the state(5) to be set 00:32:09.808 [2024-05-15 16:53:16.839188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2850 (9): Bad file descriptor 00:32:09.808 [2024-05-15 16:53:16.849241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.740 16:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.740 [2024-05-15 16:53:17.857258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:11.712 [2024-05-15 
16:53:18.881251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:11.712 [2024-05-15 16:53:18.881300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2850 with addr=10.0.0.2, port=4420 00:32:11.712 [2024-05-15 16:53:18.881325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2850 is same with the state(5) to be set 00:32:11.712 [2024-05-15 16:53:18.881830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2850 (9): Bad file descriptor 00:32:11.713 [2024-05-15 16:53:18.881877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.713 [2024-05-15 16:53:18.881916] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:11.713 [2024-05-15 16:53:18.881956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.713 [2024-05-15 16:53:18.881989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.713 [2024-05-15 16:53:18.882009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.713 [2024-05-15 16:53:18.882025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.713 [2024-05-15 16:53:18.882040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.713 [2024-05-15 16:53:18.882055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.713 [2024-05-15 16:53:18.882070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.713 [2024-05-15 16:53:18.882084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.713 [2024-05-15 16:53:18.882100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.713 [2024-05-15 16:53:18.882115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.713 [2024-05-15 16:53:18.882130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
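The error burst above is the intended effect of the fault injected earlier in the test: with the target's address deleted and its link down, every reconnect attempt from the host times out with errno 110 (ETIMEDOUT), the queued admin commands (ASYNC EVENT REQUEST, KEEP ALIVE) are aborted with ABORTED - SQ DELETION, and bdev_nvme eventually gives up and removes the discovery entry. A sketch of that injection step, repeating the two ip commands from the trace in context (namespace and interface names as shown in the log):

    # Fault injection driving the errno 110 burst: the target runs inside
    # its own network namespace, so removing its address and downing the
    # link severs the TCP qpair from the host's point of view.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down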
00:32:11.713 [2024-05-15 16:53:18.882372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1ca0 (9): Bad file descriptor 00:32:11.713 [2024-05-15 16:53:18.883389] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:11.713 [2024-05-15 16:53:18.883409] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:11.713 16:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.713 16:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:11.713 16:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.081 16:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.081 16:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:13.081 16:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.012 [2024-05-15 16:53:20.942414] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:14.012 [2024-05-15 16:53:20.942444] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:14.012 [2024-05-15 16:53:20.942464] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.012 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.012 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.012 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.012 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.013 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.013 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.013 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.013 [2024-05-15 16:53:21.028775] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:14.013 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.013 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:14.013 16:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.013 [2024-05-15 16:53:21.213165] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:14.013 [2024-05-15 16:53:21.213225] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:14.013 [2024-05-15 16:53:21.213262] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:14.013 [2024-05-15 16:53:21.213298] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:14.013 [2024-05-15 16:53:21.213310] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:14.013 [2024-05-15 16:53:21.220601] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x180ce60 was disconnected and freed. delete nvme_qpair. 
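The attach sequence above is the recovery half of the test: once the address is restored and the link comes back up, the discovery poller reconnects, reads the discovery log page, and attaches the subsystem as a fresh controller, nvme1. A sketch of the traced recovery steps, reusing wait_for_bdev from the earlier sketch:

    # Recovery: restore the target address, bring the link back up, then
    # wait for discovery to re-create the namespace under the new name.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1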
00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1912148 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1912148 ']' 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1912148 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1912148 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1912148' 00:32:14.945 killing process with pid 1912148 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1912148 00:32:14.945 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1912148 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:15.203 rmmod nvme_tcp 00:32:15.203 rmmod nvme_fabrics 00:32:15.203 rmmod nvme_keyring 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
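killprocess, traced above, is the autotest helper that tears down a daemonized SPDK app by pid. A simplified sketch of what the trace shows it doing — confirm the pid is still alive, announce, signal, and reap (the full version in autotest_common.sh also handles FreeBSD and sudo-wrapped processes, elided here):

    # Simplified sketch of the killprocess helper traced above.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1          # is the process still alive?
        echo "killing process with pid $pid"
        kill "$pid"                         # default SIGTERM
        wait "$pid" || true                 # reap it; ignore its exit code
    }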
00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1912013 ']' 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1912013 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1912013 ']' 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1912013 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:15.203 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1912013 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1912013' 00:32:15.462 killing process with pid 1912013 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1912013 00:32:15.462 [2024-05-15 16:53:22.432812] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1912013 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:15.462 16:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.993 16:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:17.993 00:32:17.993 real 0m18.275s 00:32:17.993 user 0m24.878s 00:32:17.993 sys 0m3.419s 00:32:17.993 16:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:17.993 16:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.993 ************************************ 00:32:17.993 END TEST nvmf_discovery_remove_ifc 00:32:17.993 ************************************ 00:32:17.993 16:53:24 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:17.993 16:53:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:17.993 16:53:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:17.993 16:53:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
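run_test is the wrapper that produces the START TEST / END TEST banners and the real/user/sys timing lines seen throughout this log. A simplified sketch, assuming the banner text mirrors what the log prints:

    # Simplified sketch of run_test from autotest_common.sh: it brackets a
    # test body with banners and times it (the source of the real/user/sys
    # lines above).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"               # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }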
00:32:17.993 ************************************ 00:32:17.993 START TEST nvmf_identify_kernel_target 00:32:17.993 ************************************ 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:17.993 * Looking for test storage... 00:32:17.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:17.993 16:53:24 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:17.993 16:53:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:20.530 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:20.530 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:20.530 Found net devices under 0000:09:00.0: cvl_0_0 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:20.530 Found net devices under 0000:09:00.1: cvl_0_1 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:20.530 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:20.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:32:20.531 00:32:20.531 --- 10.0.0.2 ping statistics --- 00:32:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.531 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:32:20.531 00:32:20.531 --- 10.0.0.1 ping statistics --- 00:32:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.531 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:20.531 16:53:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:20.531 16:53:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:21.906 Waiting for block devices as requested 00:32:21.906 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:21.906 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:21.906 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:21.906 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:21.906 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:21.906 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.164 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.164 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.164 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:22.423 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:22.423 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:22.423 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.423 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.680 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.680 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.680 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.680 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:22.939 16:53:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:22.939 No valid GPT data, bailing 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:22.939 00:32:22.939 Discovery Log Number of Records 2, Generation counter 2 00:32:22.939 =====Discovery Log Entry 0====== 00:32:22.939 trtype: tcp 00:32:22.939 adrfam: ipv4 00:32:22.939 subtype: current discovery subsystem 00:32:22.939 treq: not specified, sq flow control disable supported 00:32:22.939 portid: 1 00:32:22.939 trsvcid: 4420 00:32:22.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:22.939 traddr: 10.0.0.1 00:32:22.939 eflags: none 00:32:22.939 sectype: none 00:32:22.939 =====Discovery Log Entry 1====== 00:32:22.939 trtype: tcp 00:32:22.939 adrfam: ipv4 00:32:22.939 subtype: nvme subsystem 00:32:22.939 treq: not specified, sq flow control disable supported 00:32:22.939 portid: 1 00:32:22.939 trsvcid: 4420 00:32:22.939 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:22.939 traddr: 10.0.0.1 00:32:22.939 eflags: none 00:32:22.939 sectype: none 00:32:22.939 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:22.939 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:22.939 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.939 ===================================================== 00:32:22.939 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:22.939 ===================================================== 00:32:22.939 Controller Capabilities/Features 00:32:22.939 ================================ 00:32:22.939 Vendor ID: 0000 00:32:22.939 Subsystem Vendor ID: 0000 00:32:22.939 Serial Number: 656dac8d8a7e557627ec 00:32:22.939 Model Number: Linux 00:32:22.939 Firmware Version: 6.7.0-68 00:32:22.939 Recommended Arb Burst: 0 00:32:22.939 IEEE OUI Identifier: 00 00 00 00:32:22.939 Multi-path I/O 00:32:22.939 May have multiple subsystem ports: No 00:32:22.939 May have multiple 
controllers: No 00:32:22.939 Associated with SR-IOV VF: No 00:32:22.939 Max Data Transfer Size: Unlimited 00:32:22.939 Max Number of Namespaces: 0 00:32:22.939 Max Number of I/O Queues: 1024 00:32:22.939 NVMe Specification Version (VS): 1.3 00:32:22.939 NVMe Specification Version (Identify): 1.3 00:32:22.939 Maximum Queue Entries: 1024 00:32:22.939 Contiguous Queues Required: No 00:32:22.939 Arbitration Mechanisms Supported 00:32:22.939 Weighted Round Robin: Not Supported 00:32:22.939 Vendor Specific: Not Supported 00:32:22.939 Reset Timeout: 7500 ms 00:32:22.939 Doorbell Stride: 4 bytes 00:32:22.939 NVM Subsystem Reset: Not Supported 00:32:22.939 Command Sets Supported 00:32:22.940 NVM Command Set: Supported 00:32:22.940 Boot Partition: Not Supported 00:32:22.940 Memory Page Size Minimum: 4096 bytes 00:32:22.940 Memory Page Size Maximum: 4096 bytes 00:32:22.940 Persistent Memory Region: Not Supported 00:32:22.940 Optional Asynchronous Events Supported 00:32:22.940 Namespace Attribute Notices: Not Supported 00:32:22.940 Firmware Activation Notices: Not Supported 00:32:22.940 ANA Change Notices: Not Supported 00:32:22.940 PLE Aggregate Log Change Notices: Not Supported 00:32:22.940 LBA Status Info Alert Notices: Not Supported 00:32:22.940 EGE Aggregate Log Change Notices: Not Supported 00:32:22.940 Normal NVM Subsystem Shutdown event: Not Supported 00:32:22.940 Zone Descriptor Change Notices: Not Supported 00:32:22.940 Discovery Log Change Notices: Supported 00:32:22.940 Controller Attributes 00:32:22.940 128-bit Host Identifier: Not Supported 00:32:22.940 Non-Operational Permissive Mode: Not Supported 00:32:22.940 NVM Sets: Not Supported 00:32:22.940 Read Recovery Levels: Not Supported 00:32:22.940 Endurance Groups: Not Supported 00:32:22.940 Predictable Latency Mode: Not Supported 00:32:22.940 Traffic Based Keep ALive: Not Supported 00:32:22.940 Namespace Granularity: Not Supported 00:32:22.940 SQ Associations: Not Supported 00:32:22.940 UUID List: Not Supported 00:32:22.940 Multi-Domain Subsystem: Not Supported 00:32:22.940 Fixed Capacity Management: Not Supported 00:32:22.940 Variable Capacity Management: Not Supported 00:32:22.940 Delete Endurance Group: Not Supported 00:32:22.940 Delete NVM Set: Not Supported 00:32:22.940 Extended LBA Formats Supported: Not Supported 00:32:22.940 Flexible Data Placement Supported: Not Supported 00:32:22.940 00:32:22.940 Controller Memory Buffer Support 00:32:22.940 ================================ 00:32:22.940 Supported: No 00:32:22.940 00:32:22.940 Persistent Memory Region Support 00:32:22.940 ================================ 00:32:22.940 Supported: No 00:32:22.940 00:32:22.940 Admin Command Set Attributes 00:32:22.940 ============================ 00:32:22.940 Security Send/Receive: Not Supported 00:32:22.940 Format NVM: Not Supported 00:32:22.940 Firmware Activate/Download: Not Supported 00:32:22.940 Namespace Management: Not Supported 00:32:22.940 Device Self-Test: Not Supported 00:32:22.940 Directives: Not Supported 00:32:22.940 NVMe-MI: Not Supported 00:32:22.940 Virtualization Management: Not Supported 00:32:22.940 Doorbell Buffer Config: Not Supported 00:32:22.940 Get LBA Status Capability: Not Supported 00:32:22.940 Command & Feature Lockdown Capability: Not Supported 00:32:22.940 Abort Command Limit: 1 00:32:22.940 Async Event Request Limit: 1 00:32:22.940 Number of Firmware Slots: N/A 00:32:22.940 Firmware Slot 1 Read-Only: N/A 00:32:22.940 Firmware Activation Without Reset: N/A 00:32:22.940 Multiple Update Detection Support: N/A 
00:32:22.940 Firmware Update Granularity: No Information Provided 00:32:22.940 Per-Namespace SMART Log: No 00:32:22.940 Asymmetric Namespace Access Log Page: Not Supported 00:32:22.940 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:22.940 Command Effects Log Page: Not Supported 00:32:22.940 Get Log Page Extended Data: Supported 00:32:22.940 Telemetry Log Pages: Not Supported 00:32:22.940 Persistent Event Log Pages: Not Supported 00:32:22.940 Supported Log Pages Log Page: May Support 00:32:22.940 Commands Supported & Effects Log Page: Not Supported 00:32:22.940 Feature Identifiers & Effects Log Page:May Support 00:32:22.940 NVMe-MI Commands & Effects Log Page: May Support 00:32:22.940 Data Area 4 for Telemetry Log: Not Supported 00:32:22.940 Error Log Page Entries Supported: 1 00:32:22.940 Keep Alive: Not Supported 00:32:22.940 00:32:22.940 NVM Command Set Attributes 00:32:22.940 ========================== 00:32:22.940 Submission Queue Entry Size 00:32:22.940 Max: 1 00:32:22.940 Min: 1 00:32:22.940 Completion Queue Entry Size 00:32:22.940 Max: 1 00:32:22.940 Min: 1 00:32:22.940 Number of Namespaces: 0 00:32:22.940 Compare Command: Not Supported 00:32:22.940 Write Uncorrectable Command: Not Supported 00:32:22.940 Dataset Management Command: Not Supported 00:32:22.940 Write Zeroes Command: Not Supported 00:32:22.940 Set Features Save Field: Not Supported 00:32:22.940 Reservations: Not Supported 00:32:22.940 Timestamp: Not Supported 00:32:22.940 Copy: Not Supported 00:32:22.940 Volatile Write Cache: Not Present 00:32:22.940 Atomic Write Unit (Normal): 1 00:32:22.940 Atomic Write Unit (PFail): 1 00:32:22.940 Atomic Compare & Write Unit: 1 00:32:22.940 Fused Compare & Write: Not Supported 00:32:22.940 Scatter-Gather List 00:32:22.940 SGL Command Set: Supported 00:32:22.940 SGL Keyed: Not Supported 00:32:22.940 SGL Bit Bucket Descriptor: Not Supported 00:32:22.940 SGL Metadata Pointer: Not Supported 00:32:22.940 Oversized SGL: Not Supported 00:32:22.940 SGL Metadata Address: Not Supported 00:32:22.940 SGL Offset: Supported 00:32:22.940 Transport SGL Data Block: Not Supported 00:32:22.940 Replay Protected Memory Block: Not Supported 00:32:22.940 00:32:22.940 Firmware Slot Information 00:32:22.940 ========================= 00:32:22.940 Active slot: 0 00:32:22.940 00:32:22.940 00:32:22.940 Error Log 00:32:22.940 ========= 00:32:22.940 00:32:22.940 Active Namespaces 00:32:22.940 ================= 00:32:22.940 Discovery Log Page 00:32:22.940 ================== 00:32:22.940 Generation Counter: 2 00:32:22.940 Number of Records: 2 00:32:22.940 Record Format: 0 00:32:22.940 00:32:22.940 Discovery Log Entry 0 00:32:22.940 ---------------------- 00:32:22.940 Transport Type: 3 (TCP) 00:32:22.940 Address Family: 1 (IPv4) 00:32:22.940 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:22.940 Entry Flags: 00:32:22.940 Duplicate Returned Information: 0 00:32:22.940 Explicit Persistent Connection Support for Discovery: 0 00:32:22.940 Transport Requirements: 00:32:22.940 Secure Channel: Not Specified 00:32:22.940 Port ID: 1 (0x0001) 00:32:22.940 Controller ID: 65535 (0xffff) 00:32:22.940 Admin Max SQ Size: 32 00:32:22.940 Transport Service Identifier: 4420 00:32:22.940 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:22.940 Transport Address: 10.0.0.1 00:32:22.940 Discovery Log Entry 1 00:32:22.940 ---------------------- 00:32:22.940 Transport Type: 3 (TCP) 00:32:22.940 Address Family: 1 (IPv4) 00:32:22.940 Subsystem Type: 2 (NVM Subsystem) 00:32:22.940 Entry Flags: 
00:32:22.940 Duplicate Returned Information: 0 00:32:22.940 Explicit Persistent Connection Support for Discovery: 0 00:32:22.940 Transport Requirements: 00:32:22.940 Secure Channel: Not Specified 00:32:22.940 Port ID: 1 (0x0001) 00:32:22.940 Controller ID: 65535 (0xffff) 00:32:22.940 Admin Max SQ Size: 32 00:32:22.940 Transport Service Identifier: 4420 00:32:22.940 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:22.940 Transport Address: 10.0.0.1 00:32:22.940 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:22.940 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.200 get_feature(0x01) failed 00:32:23.200 get_feature(0x02) failed 00:32:23.200 get_feature(0x04) failed 00:32:23.200 ===================================================== 00:32:23.200 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:23.200 ===================================================== 00:32:23.200 Controller Capabilities/Features 00:32:23.200 ================================ 00:32:23.200 Vendor ID: 0000 00:32:23.200 Subsystem Vendor ID: 0000 00:32:23.200 Serial Number: d67393098c2c8e282b25 00:32:23.200 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:23.200 Firmware Version: 6.7.0-68 00:32:23.200 Recommended Arb Burst: 6 00:32:23.200 IEEE OUI Identifier: 00 00 00 00:32:23.200 Multi-path I/O 00:32:23.200 May have multiple subsystem ports: Yes 00:32:23.200 May have multiple controllers: Yes 00:32:23.200 Associated with SR-IOV VF: No 00:32:23.200 Max Data Transfer Size: Unlimited 00:32:23.200 Max Number of Namespaces: 1024 00:32:23.200 Max Number of I/O Queues: 128 00:32:23.200 NVMe Specification Version (VS): 1.3 00:32:23.200 NVMe Specification Version (Identify): 1.3 00:32:23.200 Maximum Queue Entries: 1024 00:32:23.200 Contiguous Queues Required: No 00:32:23.200 Arbitration Mechanisms Supported 00:32:23.200 Weighted Round Robin: Not Supported 00:32:23.200 Vendor Specific: Not Supported 00:32:23.200 Reset Timeout: 7500 ms 00:32:23.200 Doorbell Stride: 4 bytes 00:32:23.200 NVM Subsystem Reset: Not Supported 00:32:23.200 Command Sets Supported 00:32:23.200 NVM Command Set: Supported 00:32:23.200 Boot Partition: Not Supported 00:32:23.200 Memory Page Size Minimum: 4096 bytes 00:32:23.200 Memory Page Size Maximum: 4096 bytes 00:32:23.200 Persistent Memory Region: Not Supported 00:32:23.200 Optional Asynchronous Events Supported 00:32:23.200 Namespace Attribute Notices: Supported 00:32:23.200 Firmware Activation Notices: Not Supported 00:32:23.200 ANA Change Notices: Supported 00:32:23.200 PLE Aggregate Log Change Notices: Not Supported 00:32:23.200 LBA Status Info Alert Notices: Not Supported 00:32:23.200 EGE Aggregate Log Change Notices: Not Supported 00:32:23.200 Normal NVM Subsystem Shutdown event: Not Supported 00:32:23.200 Zone Descriptor Change Notices: Not Supported 00:32:23.200 Discovery Log Change Notices: Not Supported 00:32:23.200 Controller Attributes 00:32:23.200 128-bit Host Identifier: Supported 00:32:23.200 Non-Operational Permissive Mode: Not Supported 00:32:23.200 NVM Sets: Not Supported 00:32:23.200 Read Recovery Levels: Not Supported 00:32:23.200 Endurance Groups: Not Supported 00:32:23.200 Predictable Latency Mode: Not Supported 00:32:23.200 Traffic Based Keep ALive: Supported 00:32:23.200 Namespace Granularity: Not Supported 
00:32:23.200 SQ Associations: Not Supported 00:32:23.200 UUID List: Not Supported 00:32:23.200 Multi-Domain Subsystem: Not Supported 00:32:23.200 Fixed Capacity Management: Not Supported 00:32:23.200 Variable Capacity Management: Not Supported 00:32:23.200 Delete Endurance Group: Not Supported 00:32:23.200 Delete NVM Set: Not Supported 00:32:23.200 Extended LBA Formats Supported: Not Supported 00:32:23.200 Flexible Data Placement Supported: Not Supported 00:32:23.200 00:32:23.200 Controller Memory Buffer Support 00:32:23.200 ================================ 00:32:23.200 Supported: No 00:32:23.200 00:32:23.200 Persistent Memory Region Support 00:32:23.200 ================================ 00:32:23.200 Supported: No 00:32:23.200 00:32:23.200 Admin Command Set Attributes 00:32:23.200 ============================ 00:32:23.200 Security Send/Receive: Not Supported 00:32:23.200 Format NVM: Not Supported 00:32:23.200 Firmware Activate/Download: Not Supported 00:32:23.201 Namespace Management: Not Supported 00:32:23.201 Device Self-Test: Not Supported 00:32:23.201 Directives: Not Supported 00:32:23.201 NVMe-MI: Not Supported 00:32:23.201 Virtualization Management: Not Supported 00:32:23.201 Doorbell Buffer Config: Not Supported 00:32:23.201 Get LBA Status Capability: Not Supported 00:32:23.201 Command & Feature Lockdown Capability: Not Supported 00:32:23.201 Abort Command Limit: 4 00:32:23.201 Async Event Request Limit: 4 00:32:23.201 Number of Firmware Slots: N/A 00:32:23.201 Firmware Slot 1 Read-Only: N/A 00:32:23.201 Firmware Activation Without Reset: N/A 00:32:23.201 Multiple Update Detection Support: N/A 00:32:23.201 Firmware Update Granularity: No Information Provided 00:32:23.201 Per-Namespace SMART Log: Yes 00:32:23.201 Asymmetric Namespace Access Log Page: Supported 00:32:23.201 ANA Transition Time : 10 sec 00:32:23.201 00:32:23.201 Asymmetric Namespace Access Capabilities 00:32:23.201 ANA Optimized State : Supported 00:32:23.201 ANA Non-Optimized State : Supported 00:32:23.201 ANA Inaccessible State : Supported 00:32:23.201 ANA Persistent Loss State : Supported 00:32:23.201 ANA Change State : Supported 00:32:23.201 ANAGRPID is not changed : No 00:32:23.201 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:23.201 00:32:23.201 ANA Group Identifier Maximum : 128 00:32:23.201 Number of ANA Group Identifiers : 128 00:32:23.201 Max Number of Allowed Namespaces : 1024 00:32:23.201 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:23.201 Command Effects Log Page: Supported 00:32:23.201 Get Log Page Extended Data: Supported 00:32:23.201 Telemetry Log Pages: Not Supported 00:32:23.201 Persistent Event Log Pages: Not Supported 00:32:23.201 Supported Log Pages Log Page: May Support 00:32:23.201 Commands Supported & Effects Log Page: Not Supported 00:32:23.201 Feature Identifiers & Effects Log Page:May Support 00:32:23.201 NVMe-MI Commands & Effects Log Page: May Support 00:32:23.201 Data Area 4 for Telemetry Log: Not Supported 00:32:23.201 Error Log Page Entries Supported: 128 00:32:23.201 Keep Alive: Supported 00:32:23.201 Keep Alive Granularity: 1000 ms 00:32:23.201 00:32:23.201 NVM Command Set Attributes 00:32:23.201 ========================== 00:32:23.201 Submission Queue Entry Size 00:32:23.201 Max: 64 00:32:23.201 Min: 64 00:32:23.201 Completion Queue Entry Size 00:32:23.201 Max: 16 00:32:23.201 Min: 16 00:32:23.201 Number of Namespaces: 1024 00:32:23.201 Compare Command: Not Supported 00:32:23.201 Write Uncorrectable Command: Not Supported 00:32:23.201 Dataset Management Command: Supported 
00:32:23.201 Write Zeroes Command: Supported 00:32:23.201 Set Features Save Field: Not Supported 00:32:23.201 Reservations: Not Supported 00:32:23.201 Timestamp: Not Supported 00:32:23.201 Copy: Not Supported 00:32:23.201 Volatile Write Cache: Present 00:32:23.201 Atomic Write Unit (Normal): 1 00:32:23.201 Atomic Write Unit (PFail): 1 00:32:23.201 Atomic Compare & Write Unit: 1 00:32:23.201 Fused Compare & Write: Not Supported 00:32:23.201 Scatter-Gather List 00:32:23.201 SGL Command Set: Supported 00:32:23.201 SGL Keyed: Not Supported 00:32:23.201 SGL Bit Bucket Descriptor: Not Supported 00:32:23.201 SGL Metadata Pointer: Not Supported 00:32:23.201 Oversized SGL: Not Supported 00:32:23.201 SGL Metadata Address: Not Supported 00:32:23.201 SGL Offset: Supported 00:32:23.201 Transport SGL Data Block: Not Supported 00:32:23.201 Replay Protected Memory Block: Not Supported 00:32:23.201 00:32:23.201 Firmware Slot Information 00:32:23.201 ========================= 00:32:23.201 Active slot: 0 00:32:23.201 00:32:23.201 Asymmetric Namespace Access 00:32:23.201 =========================== 00:32:23.201 Change Count : 0 00:32:23.201 Number of ANA Group Descriptors : 1 00:32:23.201 ANA Group Descriptor : 0 00:32:23.201 ANA Group ID : 1 00:32:23.201 Number of NSID Values : 1 00:32:23.201 Change Count : 0 00:32:23.201 ANA State : 1 00:32:23.201 Namespace Identifier : 1 00:32:23.201 00:32:23.201 Commands Supported and Effects 00:32:23.201 ============================== 00:32:23.201 Admin Commands 00:32:23.201 -------------- 00:32:23.201 Get Log Page (02h): Supported 00:32:23.201 Identify (06h): Supported 00:32:23.201 Abort (08h): Supported 00:32:23.201 Set Features (09h): Supported 00:32:23.201 Get Features (0Ah): Supported 00:32:23.201 Asynchronous Event Request (0Ch): Supported 00:32:23.201 Keep Alive (18h): Supported 00:32:23.201 I/O Commands 00:32:23.201 ------------ 00:32:23.201 Flush (00h): Supported 00:32:23.201 Write (01h): Supported LBA-Change 00:32:23.201 Read (02h): Supported 00:32:23.201 Write Zeroes (08h): Supported LBA-Change 00:32:23.201 Dataset Management (09h): Supported 00:32:23.201 00:32:23.201 Error Log 00:32:23.201 ========= 00:32:23.201 Entry: 0 00:32:23.201 Error Count: 0x3 00:32:23.201 Submission Queue Id: 0x0 00:32:23.201 Command Id: 0x5 00:32:23.201 Phase Bit: 0 00:32:23.201 Status Code: 0x2 00:32:23.201 Status Code Type: 0x0 00:32:23.201 Do Not Retry: 1 00:32:23.201 Error Location: 0x28 00:32:23.201 LBA: 0x0 00:32:23.201 Namespace: 0x0 00:32:23.201 Vendor Log Page: 0x0 00:32:23.201 ----------- 00:32:23.201 Entry: 1 00:32:23.201 Error Count: 0x2 00:32:23.201 Submission Queue Id: 0x0 00:32:23.201 Command Id: 0x5 00:32:23.201 Phase Bit: 0 00:32:23.201 Status Code: 0x2 00:32:23.201 Status Code Type: 0x0 00:32:23.201 Do Not Retry: 1 00:32:23.201 Error Location: 0x28 00:32:23.201 LBA: 0x0 00:32:23.201 Namespace: 0x0 00:32:23.201 Vendor Log Page: 0x0 00:32:23.201 ----------- 00:32:23.201 Entry: 2 00:32:23.201 Error Count: 0x1 00:32:23.201 Submission Queue Id: 0x0 00:32:23.201 Command Id: 0x4 00:32:23.201 Phase Bit: 0 00:32:23.201 Status Code: 0x2 00:32:23.201 Status Code Type: 0x0 00:32:23.201 Do Not Retry: 1 00:32:23.201 Error Location: 0x28 00:32:23.201 LBA: 0x0 00:32:23.201 Namespace: 0x0 00:32:23.201 Vendor Log Page: 0x0 00:32:23.201 00:32:23.201 Number of Queues 00:32:23.201 ================ 00:32:23.201 Number of I/O Submission Queues: 128 00:32:23.201 Number of I/O Completion Queues: 128 00:32:23.201 00:32:23.201 ZNS Specific Controller Data 00:32:23.201 
============================ 00:32:23.201 Zone Append Size Limit: 0 00:32:23.201 00:32:23.201 00:32:23.201 Active Namespaces 00:32:23.201 ================= 00:32:23.201 get_feature(0x05) failed 00:32:23.201 Namespace ID:1 00:32:23.201 Command Set Identifier: NVM (00h) 00:32:23.201 Deallocate: Supported 00:32:23.201 Deallocated/Unwritten Error: Not Supported 00:32:23.201 Deallocated Read Value: Unknown 00:32:23.201 Deallocate in Write Zeroes: Not Supported 00:32:23.201 Deallocated Guard Field: 0xFFFF 00:32:23.201 Flush: Supported 00:32:23.201 Reservation: Not Supported 00:32:23.201 Namespace Sharing Capabilities: Multiple Controllers 00:32:23.201 Size (in LBAs): 1953525168 (931GiB) 00:32:23.201 Capacity (in LBAs): 1953525168 (931GiB) 00:32:23.201 Utilization (in LBAs): 1953525168 (931GiB) 00:32:23.201 UUID: c592f17e-8142-48f4-919d-c6a7e6ff33cf 00:32:23.201 Thin Provisioning: Not Supported 00:32:23.201 Per-NS Atomic Units: Yes 00:32:23.201 Atomic Boundary Size (Normal): 0 00:32:23.201 Atomic Boundary Size (PFail): 0 00:32:23.201 Atomic Boundary Offset: 0 00:32:23.201 NGUID/EUI64 Never Reused: No 00:32:23.201 ANA group ID: 1 00:32:23.201 Namespace Write Protected: No 00:32:23.201 Number of LBA Formats: 1 00:32:23.201 Current LBA Format: LBA Format #00 00:32:23.201 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:23.201 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:23.201 rmmod nvme_tcp 00:32:23.201 rmmod nvme_fabrics 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:23.201 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.202 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.202 16:53:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:25.113 
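
The spdk_nvme_identify run above is pointed at the kernel (nvmet) target the test configured earlier: the discovery controller reports two log entries on TCP port 4420 (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn), and the full identify that follows is that subsystem's controller and namespace data. The get_feature(0x01/0x02/0x04/0x05) failures are the tool probing optional features, presumably ones the kernel target does not implement. The same target can be exercised with stock nvme-cli; a minimal sketch, assuming nvme-cli is installed and the 10.0.0.1:4420 listener is still up (the /dev/nvme1 name is a placeholder for whatever controller the connect creates):

nvme discover -t tcp -a 10.0.0.1 -s 4420    # should show the same two discovery log entries
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme1                     # same data spdk_nvme_identify printed above
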
16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:25.113 16:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:27.013 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:27.014 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:27.014 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:27.580 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:27.838 00:32:27.838 real 0m10.085s 00:32:27.838 user 0m2.300s 00:32:27.838 sys 0m3.938s 00:32:27.838 16:53:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:27.838 16:53:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.838 ************************************ 00:32:27.838 END TEST nvmf_identify_kernel_target 00:32:27.838 ************************************ 00:32:27.838 16:53:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:27.838 16:53:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:27.838 16:53:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:27.838 16:53:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.838 ************************************ 00:32:27.838 START TEST nvmf_auth_host 00:32:27.838 ************************************ 00:32:27.838 16:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
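
Between the two tests, clean_kernel_target above unwinds the kernel target in the reverse of the order it was built: disable the namespace, unlink the subsystem from the port, remove the configfs directories, unload the modules. Condensed into plain shell (the destination of the echo 0 is an assumption; the trace only shows the value being written, and in the standard nvmet layout it goes to the namespace's enable attribute):

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet

The setup.sh pass above then rebinds the ioatdma channels and the NVMe device to vfio-pci, handing them back to userspace for the nvmf_auth_host test that begins here.
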
00:32:27.838 * Looking for test storage... 00:32:27.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.838 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.838 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.839 16:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.368 
16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:30.368 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:30.368 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:30.368 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:30.369 Found net devices under 0000:09:00.0: 
cvl_0_0 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:30.369 Found net devices under 0000:09:00.1: cvl_0_1 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:30.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:32:30.369 00:32:30.369 --- 10.0.0.2 ping statistics --- 00:32:30.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.369 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:32:30.369 00:32:30.369 --- 10.0.0.1 ping statistics --- 00:32:30.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.369 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1920110 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1920110 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1920110 ']' 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
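
nvmftestinit above wires the two E810 ports into a point-to-point NVMe/TCP fabric: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) where nvmf_tgt will run, cvl_0_1 stays in the root namespace as the initiator side, and a single ping in each direction verifies reachability before the target app starts. Stripped of the xtrace noise, the sequence is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the root namespace
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
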
00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:30.369 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e9f503aa15963b0b9424b03a9c82e706 00:32:30.627 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dvm 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e9f503aa15963b0b9424b03a9c82e706 0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e9f503aa15963b0b9424b03a9c82e706 0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e9f503aa15963b0b9424b03a9c82e706 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dvm 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dvm 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Dvm 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:30.885 
16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e15b3eebf81d18b8eb2c4ba11a7ce5423c913b85911d55bc2cb9832e3e47fa0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZMJ 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e15b3eebf81d18b8eb2c4ba11a7ce5423c913b85911d55bc2cb9832e3e47fa0 3 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e15b3eebf81d18b8eb2c4ba11a7ce5423c913b85911d55bc2cb9832e3e47fa0 3 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e15b3eebf81d18b8eb2c4ba11a7ce5423c913b85911d55bc2cb9832e3e47fa0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZMJ 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZMJ 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ZMJ 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36641a39521e4392032f33ffcaa6dbf4fa10097052951d6e 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cIZ 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36641a39521e4392032f33ffcaa6dbf4fa10097052951d6e 0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36641a39521e4392032f33ffcaa6dbf4fa10097052951d6e 0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36641a39521e4392032f33ffcaa6dbf4fa10097052951d6e 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.885 16:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cIZ 00:32:30.885 16:53:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cIZ 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cIZ 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ccf6d3d8c4a4be0476b0c3cd7108c359106be8119bef66cb 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fXt 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ccf6d3d8c4a4be0476b0c3cd7108c359106be8119bef66cb 2 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ccf6d3d8c4a4be0476b0c3cd7108c359106be8119bef66cb 2 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ccf6d3d8c4a4be0476b0c3cd7108c359106be8119bef66cb 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:30.885 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fXt 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fXt 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fXt 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=861d9cafad764c66a59d9c2cf365f965 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.exL 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 861d9cafad764c66a59d9c2cf365f965 1 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 861d9cafad764c66a59d9c2cf365f965 1 
00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=861d9cafad764c66a59d9c2cf365f965 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.exL 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.exL 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.exL 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:30.886 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d99c6f7c7daa0282e388c3ef6d913f6a 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Lwe 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d99c6f7c7daa0282e388c3ef6d913f6a 1 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d99c6f7c7daa0282e388c3ef6d913f6a 1 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d99c6f7c7daa0282e388c3ef6d913f6a 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Lwe 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Lwe 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Lwe 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=1a3305c56636344e4ba393f25eb3612ac036def25c3e62f2 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.a52 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a3305c56636344e4ba393f25eb3612ac036def25c3e62f2 2 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a3305c56636344e4ba393f25eb3612ac036def25c3e62f2 2 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a3305c56636344e4ba393f25eb3612ac036def25c3e62f2 00:32:31.143 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.a52 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.a52 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.a52 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=baba261ea3d3cf44e97f5f748ca6bbd0 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lVm 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key baba261ea3d3cf44e97f5f748ca6bbd0 0 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 baba261ea3d3cf44e97f5f748ca6bbd0 0 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=baba261ea3d3cf44e97f5f748ca6bbd0 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lVm 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lVm 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.lVm 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3193ac360eba71dd5b547dca3478214a8a0769f7dc3d354cd38fa117565834e7 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YUz 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3193ac360eba71dd5b547dca3478214a8a0769f7dc3d354cd38fa117565834e7 3 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3193ac360eba71dd5b547dca3478214a8a0769f7dc3d354cd38fa117565834e7 3 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3193ac360eba71dd5b547dca3478214a8a0769f7dc3d354cd38fa117565834e7 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YUz 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YUz 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.YUz 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1920110 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1920110 ']' 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
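
Each gen_dhchap_key call above pulls random bytes with xxd, then hands them to an inline python snippet that wraps them in a DH-HMAC-CHAP secret before the file is written to /tmp with mode 0600. A minimal sketch of the sha512/64 case, assuming the standard DHHC-1 encoding (base64 of the key bytes followed by their little-endian CRC32, with the second field 00/01/02/03 naming the null/sha256/sha384/sha512 transform):

key=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes as 64 hex chars, as in the trace
python3 -c 'import sys, base64, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # CRC32 appended so the parser can verify the key
print("DHHC-1:03:" + base64.b64encode(raw + crc).decode() + ":")
' "$key"

The loop that follows registers each file with the target via rpc_cmd keyring_file_add_key, a thin wrapper over scripts/rpc.py, so the same step done by hand would be e.g. scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Dvm.
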
00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:31.144 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Dvm 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ZMJ ]] 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZMJ 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cIZ 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.402 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fXt ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fXt 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.exL 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Lwe ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lwe 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.a52 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.lVm ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.lVm 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YUz 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
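nvmet_auth_init above resolves the initiator IP (10.0.0.1) and hands off to configure_kernel_target, whose configfs writes follow below: modprobe nvmet, a setup.sh reset to return the NVMe drive to the kernel driver, then mkdirs and bare echoes into /sys/kernel/config/nvmet. Condensed into one sketch, with attribute names taken from the kernel nvmet configfs ABI (xtrace shows only the echoed values, so the echo-to-attribute mapping here is inferred, and the model/serial write is omitted):

# Sketch: export a local block device via the kernel nvmet/tcp target (run as root)
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port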
00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:31.660 16:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:33.033 Waiting for block devices as requested 00:32:33.033 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:33.033 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:33.033 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:33.033 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:33.291 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:33.291 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:33.291 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:33.291 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:33.549 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:33.549 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:33.549 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:33.806 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:33.806 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:33.806 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:33.806 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:33.806 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:34.063 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:34.320 No valid GPT data, bailing 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:34.320 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:34.320 00:32:34.320 Discovery Log Number of Records 2, Generation counter 2 00:32:34.320 =====Discovery Log Entry 0====== 00:32:34.320 trtype: tcp 00:32:34.320 adrfam: ipv4 00:32:34.320 subtype: current discovery subsystem 00:32:34.320 treq: not specified, sq flow control disable supported 00:32:34.320 portid: 1 00:32:34.320 trsvcid: 4420 00:32:34.320 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:34.320 traddr: 10.0.0.1 00:32:34.320 eflags: none 00:32:34.320 sectype: none 00:32:34.320 =====Discovery Log Entry 1====== 00:32:34.320 trtype: tcp 00:32:34.320 adrfam: ipv4 00:32:34.320 subtype: nvme subsystem 00:32:34.320 treq: not specified, sq flow control disable supported 00:32:34.320 portid: 1 00:32:34.321 trsvcid: 4420 00:32:34.321 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:34.321 traddr: 10.0.0.1 00:32:34.321 eflags: none 00:32:34.321 sectype: none 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 
]] 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.321 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.579 nvme0n1 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.579 16:53:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.579 
16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.579 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.837 nvme0n1 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.837 16:53:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.837 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.838 nvme0n1 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.838 16:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
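Each connect_authenticate pass above boils down to two RPCs plus a verify/teardown pair. For the iteration just completed (sha256, ffdhe2048, key slot 1), the equivalent scripts/rpc.py invocations, with every flag taken verbatim from the trace:

# Host-side RPC pair behind connect_authenticate
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Success is checked with bdev_nvme_get_controllers (expecting "nvme0"),
# then the session is torn down with bdev_nvme_detach_controller nvme0.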
00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.838 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.096 nvme0n1 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.096 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:35.097 16:53:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.097 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.355 nvme0n1 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.355 nvme0n1 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.355 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.356 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.615 nvme0n1 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.615 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 nvme0n1 00:32:35.881 
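From here the same four-step pattern (program the kernel target, set host options, attach, detach) repeats mechanically. Reconstructed from the host/auth.sh line numbers visible in the trace (@100 digest loop, @101 dhgroup loop, @102 keyid loop, @103/@104 the two helpers), the driver is a three-deep sweep over the lists printed earlier; keys[] is the array of secret files registered above:

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach from SPDK and verify
        done
    done
done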
16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 16:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 nvme0n1 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
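The quoted echoes around this point are the target-side half of nvmet_auth_set_key: the hash name, DH group, and both DHHC-1 secrets land in the nvmet host entry's DH-HMAC-CHAP attributes. A sketch for the iteration in progress (sha256, ffdhe3072, key slot 3), with configfs paths inferred from the kernel nvmet auth ABI since xtrace shows only the echoed values:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe3072      > "$host/dhchap_dhgroup"
# this run's key3 (host secret) and ckey3 (controller secret), verbatim:
echo 'DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==:' > "$host/dhchap_key"
echo 'DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0:' > "$host/dhchap_ctrl_key"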
00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.139 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.397 nvme0n1 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.397 
16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:36.397 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.398 16:53:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.398 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.657 nvme0n1 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.657 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:36.658 16:53:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.658 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.965 nvme0n1 00:32:36.965 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.965 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.965 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.965 16:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.965 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.965 16:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:36.965 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.966 16:53:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.966 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.224 nvme0n1 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.224 16:53:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.224 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.482 nvme0n1 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
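[Annotation] The host/auth.sh@42-51 entries that open each iteration are nvmet_auth_set_key, which installs the DH-HMAC-CHAP secret under test (a DHHC-1:<hh>:<base64>: blob) on the kernel nvmet target before the initiator connects; keyid 4 has an empty ckey, so @51 skips the controller key and that iteration exercises one-way authentication. Sketch below; xtrace never prints redirections, so the configfs targets of the echoes are illustrative assumptions ($nvmet_host and the dhchap_* attribute names are not in this log):

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey      # auth.sh@42
        digest="$1" dhgroup="$2" keyid="$3"      # auth.sh@44
        key="${keys[keyid]}"                     # auth.sh@45
        ckey="${ckeys[keyid]}"                   # auth.sh@46: empty for keyid 4

        # assumed kernel nvmet configfs attributes; the trace shows only the echoes
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"        # auth.sh@48
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"          # auth.sh@49
        echo "$key" > "$nvmet_host/dhchap_key"                  # auth.sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"  # auth.sh@51
    }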
00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:37.482 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.740 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.998 nvme0n1 00:32:37.998 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.998 16:53:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.998 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.998 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.998 16:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.998 16:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.998 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.999 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.256 nvme0n1 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.256 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:38.257 16:53:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.257 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.822 nvme0n1 00:32:38.822 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.822 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.822 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.822 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.822 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.822 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.823 
16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.823 16:53:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.823 16:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.388 nvme0n1 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.388 16:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.954 nvme0n1 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.954 
16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.954 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.520 nvme0n1 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.520 16:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.085 nvme0n1 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:41.085 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.086 16:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.019 nvme0n1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.019 16:53:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.019 16:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.953 nvme0n1 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:42.953 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.210 16:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.142 nvme0n1 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.142 
16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:44.142 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
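[For reference: the get_main_ns_ip helper traced in the surrounding nvmf/common.sh@741-755 entries reduces to roughly the sketch below. Only the array setup and the [[ -z ... ]] guards appear verbatim in the trace; the TEST_TRANSPORT variable name and the return-value handling are assumptions.]

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1    # transport is "tcp" in this run
        ip=${ip_candidates[$TEST_TRANSPORT]}    # selects the variable *name*
        [[ -z $ip ]] && return 1
        [[ -z ${!ip} ]] && return 1             # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }

[The trace shows the guards after expansion -- [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]] -- which is why the log prints the values rather than the variable names.]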
00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.143 16:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.074 nvme0n1 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.074 
16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.074 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.007 nvme0n1 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.007 16:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.007 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:46.007 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.007 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 nvme0n1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
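[The ckey=(...) entry just above uses bash's ${var:+word} expansion to build an optional argument vector: when ckeys[keyid] is set and non-empty, the array receives the extra --dhchap-ctrlr-key words; otherwise it stays empty and the later rpc_cmd gets no controller-key flag at all, which is exactly what happens for keyid 4 in this log (ckey=''). Minimal standalone demo; the key string below is a hypothetical placeholder, not one of the test's real keys.]

    declare -a ckeys=("DHHC-1:03:QUJDRA==:" "")
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no extra args>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=1 -> <no extra args>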
00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.008 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.266 nvme0n1 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.266 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.524 nvme0n1 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.524 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.525 nvme0n1 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.525 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.783 nvme0n1 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
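[Stripped of the xtrace noise, one pass of the connect/verify/disconnect cycle that repeats throughout this section is four RPCs. A sketch using the values from the immediately surrounding entries: rpc_cmd in the test harness wraps scripts/rpc.py, and the key0/ckey0 keyring names must already have been registered by setup that falls before this excerpt.]

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0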
00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.783 16:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.041 nvme0n1 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
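[The DHHC-1 strings echoed throughout this log follow the standard NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hash>:<base64 payload>:, where <hash> selects an optional hash transform of the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick way to inspect one -- the parsing one-liner is an editorial sketch, but the key is keyid 0 from this very log:]

    key='DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi:'
    IFS=: read -r _ hash b64 _ <<< "$key"
    bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "hash=$hash secret=$((bytes - 4)) bytes"   # -> hash=00 secret=32 bytes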
00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.041 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.298 nvme0n1 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.298 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.299 nvme0n1 00:32:47.299 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.557 nvme0n1 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.557 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.816 nvme0n1 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.816 16:53:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.816 16:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.816 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.074 nvme0n1 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.074 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.332 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.333 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.333 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.333 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.591 nvme0n1 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.591 16:53:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.591 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.849 nvme0n1 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:48.849 16:53:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.849 16:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.107 nvme0n1 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:49.107 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.365 nvme0n1 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.365 16:53:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.623 16:53:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.623 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.623 16:53:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.881 nvme0n1 00:32:49.881 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.881 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.881 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.881 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.881 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.139 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.704 nvme0n1 00:32:50.704 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.704 16:53:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.705 16:53:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 nvme0n1 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.270 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.835 nvme0n1 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.835 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
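[editor's note] The get_main_ns_ip trace that repeats before every attach resolves the initiator address by indirection: a transport-to-variable-name map is consulted, and the resulting name (here NVMF_INITIATOR_IP) is dereferenced to 10.0.0.1. Below is a minimal sketch of that helper reconstructed from the substituted values visible in the trace; the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value "tcp".

    # Sketch of the nvmf/common.sh helper, reconstructed from the xtrace output.
    # TEST_TRANSPORT is assumed; the trace only shows its expanded value "tcp".
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                   # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                 # trace: echo 10.0.0.1
    }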
00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.836 16:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.401 nvme0n1 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
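The target-side half, nvmet_auth_set_key (host/auth.sh@42 through @51 in the trace), is what produces the echo 'hmac(sha384)', echo ffdhe8192, and echo DHHC-1:... lines above. A minimal sketch, assuming the Linux kernel nvmet configfs layout and the host entry this test created earlier; the host key value is the one visible in the trace for key id 0.

  # Push the digest, DH group, and key material into the kernel target's host entry.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"     # DH-HMAC-CHAP digest
  echo ffdhe8192      > "$host/dhchap_dhgroup"  # FFDHE group
  echo 'DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi:' > "$host/dhchap_key"
  # Written only when a controller key exists for this key id:
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"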
00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.401 16:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.397 nvme0n1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.397 16:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.329 nvme0n1 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.329 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.330 16:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.261 nvme0n1 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.261 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.262 16:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.192 nvme0n1 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.192 16:54:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.192 16:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.123 nvme0n1 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.123 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.124 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.124 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.124 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.124 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.124 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.124 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.381 nvme0n1 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.381 16:54:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.381 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.382 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.640 nvme0n1 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.640 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 nvme0n1 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.899 16:54:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.899 16:54:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.899 16:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 nvme0n1 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.899 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.157 nvme0n1 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.157 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.158 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 nvme0n1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 
16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.415 16:54:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.415 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 nvme0n1 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
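The trace repeats one fixed cycle per key index. A minimal sketch of that cycle, reconstructed from the commands visible above (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; the address, port, and NQNs are the fixed values this run uses; ckeys is the test's array of controller keys):

connect_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3
    # limit the host to one digest/DH-group combination for this attempt
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach with host key N; the controller-key flag pair is added only
    # when a ckey is defined for this keyid (bidirectional authentication)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # the controller only appears in the list if DH-HMAC-CHAP succeeded
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}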
00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 nvme0n1 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.673 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.931 16:54:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
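Every secret in this trace uses the NVMe-oF configured-key representation DHHC-1:XX:<base64>:, where, per the DH-HMAC-CHAP key format, XX names the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by its 4-byte CRC-32. A hypothetical split of the first key above, assuming standard coreutils:

key='DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi:'
hex=$(cut -d: -f3 <<<"$key" | base64 -d | xxd -p | tr -d '\n')
echo "secret: ${hex:0:$((${#hex}-8))}"   # leading bytes: the secret itself
echo "crc32 : ${hex: -8}"                # trailing 4 bytes: CRC-32 check value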
00:32:58.931 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 16:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 nvme0n1 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.932 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.190 
16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.190 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.190 nvme0n1 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.191 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.448 nvme0n1 00:32:59.448 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.448 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.448 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.448 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.448 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.448 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.707 16:54:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.707 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.965 nvme0n1 00:32:59.965 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.965 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.965 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.965 16:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.965 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.965 16:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
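Note the one-way/two-way switch at host/auth.sh@58: keyid 4 carries no controller key (its ckey echoes as empty above), so that pass exercises unidirectional authentication, while keyids 0-3 are bidirectional. A self-contained illustration of the array-conditional expansion involved:

# The ${var:+...} array idiom: an empty or unset ckey expands to an empty
# array, so no controller-key flag is passed and the handshake is one-way
# (the host authenticates itself, the controller does not).
ckeys=("ckey-for-0" "ckey-for-1" "")      # index 2 left empty on purpose
for keyid in "${!ckeys[@]}"; do
    args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${args[*]:-<no controller key: one-way auth>}"
done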
00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.965 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.223 nvme0n1 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.223 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.489 nvme0n1 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.489 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.490 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.756 nvme0n1 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:33:00.756 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
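At this point the dhgroup loop has advanced to ffdhe6144. The driver loop, pieced together from the host/auth.sh@101-104 markers in the trace (the digest is sha512 throughout this excerpt; any outer digest iteration sits outside it):

for dhgroup in "${dhgroups[@]}"; do          # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do           # key indices 0..4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the target side
        connect_authenticate sha512 "$dhgroup" "$keyid"  # then connect as the host
    done
done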
00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.014 16:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.581 nvme0n1 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
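The get_main_ns_ip helper that precedes every attach resolves which address the host should dial. A reconstruction from the nvmf/common.sh@741-755 trace lines; the TEST_TRANSPORT name is an assumption, since the trace only shows its expanded value, tcp:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # assumed: the transport arrives via TEST_TRANSPORT (expands to tcp here)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                  # indirect: the address itself
    echo "${!ip}"                                # 10.0.0.1 in this run
}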
00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.581 16:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.839 nvme0n1 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.839 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.097 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.355 nvme0n1 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.355 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.613 16:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.178 nvme0n1 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.178 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.743 nvme0n1 00:33:03.743 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.743 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.743 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.744 16:54:10 
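[editor's note] The keyid=4 iteration above shows why the --dhchap-ctrlr-key arguments vanish from the final attach call: auth.sh@58 builds the optional flags with bash's ":+" alternate-value expansion, so an empty ckeys entry expands to zero words. A minimal standalone sketch of that pattern (the key table values here are made up for illustration):

# :+ expands to the alternate text only when the entry is set and non-empty
ckeys=([1]="DHHC-1:02:hypotheticalCtrlrKey:" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra args: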
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTlmNTAzYWExNTk2M2IwYjk0MjRiMDNhOWM4MmU3MDZiVUBi: 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUxNWIzZWViZjgxZDE4YjhlYjJjNGJhMTFhN2NlNTQyM2M5MTNiODU5MTFkNTViYzJjYjk4MzJlM2U0N2ZhMCEQSuk=: 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.744 16:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.678 nvme0n1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.678 16:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.610 nvme0n1 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.610 16:54:12 
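[editor's note] Every loop iteration above runs the same four RPC steps, just with a different digest/dhgroup/keyid combination. A condensed sketch of one connect_authenticate pass, assuming rpc_cmd wraps SPDK's scripts/rpc.py as it does in this harness (all flag names and NQNs are taken verbatim from the trace):

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0                                # reset for the next keyid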
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxZDljYWZhZDc2NGM2NmE1OWQ5YzJjZjM2NWY5NjXRHVST: 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk5YzZmN2M3ZGFhMDI4MmUzODhjM2VmNmQ5MTNmNmEbustx: 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.610 16:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 nvme0n1 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.541 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzMzA1YzU2NjM2MzQ0ZTRiYTM5M2YyNWViMzYxMmFjMDM2ZGVmMjVjM2U2MmYyO2q7Ug==: 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: ]] 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFiYTI2MWVhM2QzY2Y0NGU5N2Y1Zjc0OGNhNmJiZDBh1AK0: 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:06.799 16:54:13 
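[editor's note] The get_main_ns_ip block that keeps repeating above resolves the initiator address by transport: it maps the transport name to an environment-variable name, then dereferences that variable indirectly. A sketch consistent with the traced steps (function body reconstructed from the trace, not copied from nvmf/common.sh):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # NVMF_INITIATOR_IP for tcp, as traced
    [[ -z ${!ip} ]] && return 1            # indirect expansion: value of $NVMF_INITIATOR_IP
    echo "${!ip}"                          # 10.0.0.1 in this run
}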
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.799 16:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.730 nvme0n1 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzE5M2FjMzYwZWJhNzFkZDViNTQ3ZGNhMzQ3ODIxNGE4YTA3NjlmN2RjM2QzNTRjZDM4ZmExMTc1NjU4MzRlN/68yCs=: 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:07.730 16:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.664 nvme0n1 00:33:08.664 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.664 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2NDFhMzk1MjFlNDM5MjAzMmYzM2ZmY2FhNmRiZjRmYTEwMDk3MDUyOTUxZDZlWqgftw==: 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NmNmQzZDhjNGE0YmUwNDc2YjBjM2NkNzEwOGMzNTkxMDZiZTgxMTliZWY2NmNiUqIT5Q==: 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.665 
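[editor's note] At auth.sh@110 the test switches from the positive matrix to negative cases: the target is re-keyed for sha256/ffdhe2048, and the attach attempts that follow are deliberately misconfigured (no host key, wrong key, mismatched controller key), so they must fail. The NOT wrapper inverts the exit status; a condensed sketch of its semantics (the real helper in autotest_common.sh also validates the argument and screens signal exits, i.e. es > 128):

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # succeed only if the wrapped command failed
}
NOT rpc_cmd bdev_nvme_attach_controller ...   # expected to return the JSON-RPC error below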
16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 request: 00:33:08.665 { 00:33:08.665 "name": "nvme0", 00:33:08.665 "trtype": "tcp", 00:33:08.665 "traddr": "10.0.0.1", 00:33:08.665 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:08.665 "adrfam": "ipv4", 00:33:08.665 "trsvcid": "4420", 00:33:08.665 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:08.665 "method": "bdev_nvme_attach_controller", 00:33:08.665 "req_id": 1 00:33:08.665 } 00:33:08.665 Got JSON-RPC error response 00:33:08.665 response: 00:33:08.665 { 00:33:08.665 "code": -32602, 00:33:08.665 "message": "Invalid parameters" 00:33:08.665 } 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:08.665 
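[editor's note] After each expected failure, auth.sh@114 confirms that the failed handshake left nothing behind by counting the attached controllers, which is what the "jq length" / "(( 0 == 0 ))" pair above is doing. The assertion boils down to:

# no controller may survive a rejected DH-CHAP negotiation
(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))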
16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 request: 00:33:08.665 { 00:33:08.665 "name": "nvme0", 00:33:08.665 "trtype": "tcp", 00:33:08.665 "traddr": "10.0.0.1", 00:33:08.665 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:08.665 "adrfam": "ipv4", 00:33:08.665 "trsvcid": "4420", 00:33:08.665 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:08.665 "dhchap_key": "key2", 00:33:08.665 "method": "bdev_nvme_attach_controller", 00:33:08.665 "req_id": 1 00:33:08.665 } 00:33:08.665 Got JSON-RPC error response 00:33:08.665 response: 00:33:08.665 { 00:33:08.665 "code": -32602, 00:33:08.665 "message": "Invalid parameters" 00:33:08.665 } 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:08.665 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.666 request: 00:33:08.666 { 00:33:08.666 "name": "nvme0", 00:33:08.666 "trtype": "tcp", 00:33:08.666 "traddr": "10.0.0.1", 00:33:08.666 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:08.666 "adrfam": "ipv4", 00:33:08.666 "trsvcid": "4420", 00:33:08.666 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:08.666 "dhchap_key": "key1", 00:33:08.666 "dhchap_ctrlr_key": "ckey2", 00:33:08.666 "method": "bdev_nvme_attach_controller", 00:33:08.666 
"req_id": 1 00:33:08.666 } 00:33:08.666 Got JSON-RPC error response 00:33:08.666 response: 00:33:08.666 { 00:33:08.666 "code": -32602, 00:33:08.666 "message": "Invalid parameters" 00:33:08.666 } 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:08.666 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:08.666 rmmod nvme_tcp 00:33:08.923 rmmod nvme_fabrics 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1920110 ']' 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1920110 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1920110 ']' 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 1920110 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1920110 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1920110' 00:33:08.923 killing process with pid 1920110 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1920110 00:33:08.923 16:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1920110 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:09.221 
16:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.221 16:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:11.128 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:11.129 16:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:12.499 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:12.499 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:12.499 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:13.432 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:13.690 16:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Dvm /tmp/spdk.key-null.cIZ /tmp/spdk.key-sha256.exL /tmp/spdk.key-sha384.a52 /tmp/spdk.key-sha512.YUz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:13.690 16:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.624 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:14.624 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:14.624 0000:00:04.5 (8086 0e25): Already 
using the vfio-pci driver 00:33:14.624 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:14.624 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:14.624 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:14.881 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:14.881 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:14.881 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:14.881 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:14.881 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:14.881 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:14.881 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:14.881 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:14.881 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:14.881 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:14.881 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:14.881 00:33:14.881 real 0m47.111s 00:33:14.881 user 0m44.121s 00:33:14.881 sys 0m6.144s 00:33:14.881 16:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:14.881 16:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.881 ************************************ 00:33:14.881 END TEST nvmf_auth_host 00:33:14.881 ************************************ 00:33:14.881 16:54:22 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:33:14.881 16:54:22 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.881 16:54:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:14.881 16:54:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:14.881 16:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.881 ************************************ 00:33:14.881 START TEST nvmf_digest 00:33:14.881 ************************************ 00:33:14.881 16:54:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:15.140 * Looking for test storage... 
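[editor's note] The END TEST / START TEST banners above come from the run_test harness in nvmf.sh: it wraps each suite script, prints the banners, and times the run (the real/user/sys lines above). A rough sketch of that shape, assuming the real helper also manages xtrace and its own timing bookkeeping:

run_test() {
    local suite=$1; shift
    echo "************************************"
    echo "START TEST $suite"
    echo "************************************"
    time "$@"              # e.g. .../test/nvmf/host/digest.sh --transport=tcp
    echo "END TEST $suite"
}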
00:33:15.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:15.140 16:54:22 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:15.140 16:54:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:17.667 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:17.667 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:17.667 Found net devices under 0000:09:00.0: cvl_0_0 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:17.667 Found net devices under 0000:09:00.1: cvl_0_1 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:17.667 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:17.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:33:17.668 00:33:17.668 --- 10.0.0.2 ping statistics --- 00:33:17.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.668 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:33:17.668 00:33:17.668 --- 10.0.0.1 ping statistics --- 00:33:17.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.668 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:17.668 ************************************ 00:33:17.668 START TEST nvmf_digest_clean 00:33:17.668 ************************************ 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1930457 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1930457 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1930457 ']' 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.668 
16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:17.668 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.668 [2024-05-15 16:54:24.790025] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:17.668 [2024-05-15 16:54:24.790105] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.668 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.668 [2024-05-15 16:54:24.863804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.926 [2024-05-15 16:54:24.947981] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.926 [2024-05-15 16:54:24.948042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.926 [2024-05-15 16:54:24.948055] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.926 [2024-05-15 16:54:24.948066] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.926 [2024-05-15 16:54:24.948076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
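 
For reference, the --wait-for-rpc launch traced here follows a simple poll-until-listening pattern: the target is started in the background inside the test namespace, and the harness retries an RPC until the UNIX socket answers. A minimal sketch of that flow, assuming an SPDK checkout with build/bin/nvmf_tgt and scripts/rpc.py present (illustrative only, not the harness's own waitforlisten helper):

#!/usr/bin/env bash
# Illustrative sketch of the waitforlisten pattern seen above; the real
# helper lives in test/common/autotest_common.sh and handles more cases.
set -euo pipefail

sock=/var/tmp/spdk.sock
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods answers as soon as the RPC server is up, even while
    # the rest of the framework is still parked by --wait-for-rpc.
    if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        echo "pid $nvmfpid is listening on $sock"
        break
    fi
    sleep 0.1
done

Only after this poll succeeds does the harness issue configuration RPCs, which is why every test below starts with a "Waiting for process to start up and listen on UNIX domain socket..." message.
 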
00:33:17.926 [2024-05-15 16:54:24.948112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.926 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:17.926 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:17.926 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:17.926 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:17.926 16:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.926 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.926 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:17.926 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:17.926 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:17.926 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.926 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.926 null0 00:33:17.926 [2024-05-15 16:54:25.132886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.183 [2024-05-15 16:54:25.156867] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:18.183 [2024-05-15 16:54:25.157169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1930483 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1930483 /var/tmp/bperf.sock 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1930483 ']' 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:18.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:18.183 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.183 [2024-05-15 16:54:25.201892] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:18.183 [2024-05-15 16:54:25.201953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930483 ] 00:33:18.183 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.183 [2024-05-15 16:54:25.271813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.183 [2024-05-15 16:54:25.358936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.440 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:18.440 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:18.440 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:18.440 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:18.440 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:18.698 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:18.698 16:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.264 nvme0n1 00:33:19.264 16:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:19.264 16:54:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.264 Running I/O for 2 seconds... 
00:33:21.163 
00:33:21.163 Latency(us)
00:33:21.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:21.163 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:21.163 nvme0n1 : 2.00 17953.99 70.13 0.00 0.00 7118.98 3786.52 15340.28
00:33:21.163 ===================================================================================================================
00:33:21.163 Total : 17953.99 70.13 0.00 0.00 7118.98 3786.52 15340.28
00:33:21.163 0
00:33:21.163 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:21.163 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:21.164 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:21.164 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:21.164 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:21.164 | select(.opcode=="crc32c")
00:33:21.164 | "\(.module_name) \(.executed)"'
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1930483
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1930483 ']'
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1930483
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:21.421 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1930483
00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1930483'
00:33:21.678 killing process with pid 1930483
00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1930483
00:33:21.678 Received shutdown signal, test time was about 2.000000 seconds
00:33:21.678 
00:33:21.678 Latency(us)
00:33:21.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:21.678 ===================================================================================================================
00:33:21.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1930483
00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:33:21.678 16:54:28
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1930975 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1930975 /var/tmp/bperf.sock 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1930975 ']' 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:21.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:21.678 16:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.936 [2024-05-15 16:54:28.909522] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:21.936 [2024-05-15 16:54:28.909617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930975 ] 00:33:21.936 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:21.936 Zero copy mechanism will not be used. 
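 
The 131072-byte I/O size chosen for this run is what trips the "greater than zero copy threshold (65536)" notice that follows: bdevperf simply reports that payloads above 64 KiB will not use its zero-copy path. Condensed, one run_bperf iteration amounts to the sequence below, assembled from the commands traced in this log (paths relative to an SPDK checkout; the sleep stands in for the real waitforlisten poll):

#!/usr/bin/env bash
# Sketch of a single run_bperf iteration from this log (illustrative).
set -euo pipefail

sock=/var/tmp/bperf.sock

./build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 \
    -z --wait-for-rpc &
bperfpid=$!

sleep 1   # stand-in for a real waitforlisten loop on $sock
./scripts/rpc.py -s "$sock" framework_start_init
# Attach the NVMe-oF TCP controller with data digest enabled (--ddgst), so
# every payload is covered by a crc32c computed through the accel framework.
./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Drive the timed workload and collect the IOPS/latency table.
./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
 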
00:33:21.936 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.936 [2024-05-15 16:54:28.981910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.936 [2024-05-15 16:54:29.067481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.936 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:21.936 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:21.936 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:21.936 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:21.936 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.500 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.500 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.757 nvme0n1 00:33:22.757 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:22.757 16:54:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:22.757 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:22.757 Zero copy mechanism will not be used. 00:33:22.757 Running I/O for 2 seconds... 
00:33:25.280 
00:33:25.280 Latency(us)
00:33:25.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:25.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:25.280 nvme0n1 : 2.00 3656.78 457.10 0.00 0.00 4370.50 1462.42 11213.94
00:33:25.280 ===================================================================================================================
00:33:25.280 Total : 3656.78 457.10 0.00 0.00 4370.50 1462.42 11213.94
00:33:25.280 0
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:25.280 | select(.opcode=="crc32c")
00:33:25.280 | "\(.module_name) \(.executed)"'
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1930975
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1930975 ']'
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1930975
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1930975
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1930975'
00:33:25.280 killing process with pid 1930975
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1930975
00:33:25.280 Received shutdown signal, test time was about 2.000000 seconds
00:33:25.280 
00:33:25.280 Latency(us)
00:33:25.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:25.280 ===================================================================================================================
00:33:25.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:25.280 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1930975
00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:33:25.538 16:54:32
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1931411 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1931411 /var/tmp/bperf.sock 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1931411 ']' 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.538 [2024-05-15 16:54:32.557168] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
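 
Each iteration ends with the same verification traced above: the accel layer's statistics are fetched over the bperf socket and the crc32c counters are checked. In sketch form, assuming jq is installed and bdevperf is still serving /var/tmp/bperf.sock (digest.sh's real helper differs slightly in plumbing):

#!/usr/bin/env bash
# Sketch of the digest verification step (illustrative).
read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
)

# No DSA was requested (scan_dsa=false), so the digests must have been
# computed by the software module, and at least once.
(( acc_executed > 0 )) || { echo "no crc32c operations executed"; exit 1; }
[[ $acc_module == software ]] || { echo "unexpected module: $acc_module"; exit 1; }
 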
00:33:25.538 [2024-05-15 16:54:32.557279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931411 ] 00:33:25.538 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.538 [2024-05-15 16:54:32.626536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.538 [2024-05-15 16:54:32.707490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:25.538 16:54:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.103 16:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.103 16:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.360 nvme0n1 00:33:26.360 16:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:26.360 16:54:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.360 Running I/O for 2 seconds... 
00:33:28.884 
00:33:28.884 Latency(us)
00:33:28.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:28.884 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:28.884 nvme0n1 : 2.01 19812.65 77.39 0.00 0.00 6444.58 2779.21 16505.36
00:33:28.884 ===================================================================================================================
00:33:28.884 Total : 19812.65 77.39 0.00 0.00 6444.58 2779.21 16505.36
00:33:28.884 0
00:33:28.884 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:28.884 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:28.884 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:28.884 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:28.884 | select(.opcode=="crc32c")
00:33:28.884 | "\(.module_name) \(.executed)"'
00:33:28.884 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:28.884 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1931411
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1931411 ']'
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1931411
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1931411
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1931411'
00:33:28.885 killing process with pid 1931411
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1931411
00:33:28.885 Received shutdown signal, test time was about 2.000000 seconds
00:33:28.885 
00:33:28.885 Latency(us)
00:33:28.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:28.885 ===================================================================================================================
00:33:28.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:28.885 16:54:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1931411
00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:33:28.885 16:54:36
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1931817 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1931817 /var/tmp/bperf.sock 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1931817 ']' 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:28.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:28.885 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:29.152 [2024-05-15 16:54:36.128195] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:29.152 [2024-05-15 16:54:36.128280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931817 ] 00:33:29.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.152 Zero copy mechanism will not be used. 
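 
Between iterations the harness tears each bdevperf instance down with the killprocess sequence traced in the runs above. Condensed into a sketch (the real helper in common/autotest_common.sh does more, e.g. branching on uname and on sudo-owned processes, as the '[' reactor_1 = sudo ']' checks hint):

# Illustrative condensation of the teardown traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0                # nothing to do if already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")   # bdevperf shows up as reactor_1
    echo "killing process with pid $pid ($name)"
    kill "$pid"      # SIGTERM: bdevperf prints its shutdown latency table
    wait "$pid" || true                       # reap it before the next run
}

killprocess "$bperfpid"   # bperfpid as recorded when bdevperf was launched
 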
00:33:29.152 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.152 [2024-05-15 16:54:36.198730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.152 [2024-05-15 16:54:36.282408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.152 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:29.152 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:29.152 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.152 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.152 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:29.758 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.758 16:54:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.016 nvme0n1 00:33:30.016 16:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:30.016 16:54:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.274 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:30.274 Zero copy mechanism will not be used. 00:33:30.274 Running I/O for 2 seconds... 
00:33:32.172 
00:33:32.172 Latency(us)
00:33:32.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:32.172 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:32.172 nvme0n1 : 2.00 3486.53 435.82 0.00 0.00 4578.24 3034.07 13301.38
00:33:32.172 ===================================================================================================================
00:33:32.172 Total : 3486.53 435.82 0.00 0.00 4578.24 3034.07 13301.38
00:33:32.172 0
00:33:32.172 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:32.172 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:32.172 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:32.172 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:32.172 | select(.opcode=="crc32c")
00:33:32.172 | "\(.module_name) \(.executed)"'
00:33:32.172 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1931817
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1931817 ']'
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1931817
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1931817
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:32.429 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1931817'
00:33:32.429 killing process with pid 1931817
00:33:32.430 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1931817
00:33:32.430 Received shutdown signal, test time was about 2.000000 seconds
00:33:32.430 
00:33:32.430 Latency(us)
00:33:32.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:32.430 ===================================================================================================================
00:33:32.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:32.430 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1931817
00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1930457
00:33:32.687 16:54:39
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1930457 ']' 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1930457 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1930457 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1930457' 00:33:32.687 killing process with pid 1930457 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1930457 00:33:32.687 [2024-05-15 16:54:39.832279] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:32.687 16:54:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1930457 00:33:32.944 00:33:32.944 real 0m15.315s 00:33:32.944 user 0m30.438s 00:33:32.944 sys 0m4.116s 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:32.944 ************************************ 00:33:32.944 END TEST nvmf_digest_clean 00:33:32.944 ************************************ 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:32.944 ************************************ 00:33:32.944 START TEST nvmf_digest_error 00:33:32.944 ************************************ 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1932260 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1932260 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
1932260 ']' 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.944 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.944 [2024-05-15 16:54:40.161655] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:32.944 [2024-05-15 16:54:40.161755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.201 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.201 [2024-05-15 16:54:40.244634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.201 [2024-05-15 16:54:40.333880] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.201 [2024-05-15 16:54:40.333957] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.201 [2024-05-15 16:54:40.333974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.201 [2024-05-15 16:54:40.333987] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.201 [2024-05-15 16:54:40.333999] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
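 
The error-path test being set up here reroutes the target's crc32c work onto the accel "error" module so that digest failures can be injected on demand. The RPC sequence it performs, condensed into a sketch (RPC names are taken from the trace that follows; /var/tmp/spdk.sock is assumed as the target's default RPC socket):

#!/usr/bin/env bash
# Sketch of the nvmf_digest_error accel setup (illustrative).
rpc=./scripts/rpc.py
sock=/var/tmp/spdk.sock

# While the target is still parked by --wait-for-rpc, route every crc32c
# operation through the accel "error" module, then start the framework.
$rpc -s "$sock" accel_assign_opc -o crc32c -m error
$rpc -s "$sock" framework_start_init

# Injection starts disabled, so the baseline I/O passes cleanly...
$rpc -s "$sock" accel_error_inject_error -o crc32c -t disable

# ...then the next 256 crc32c results are corrupted, and the digest checks
# on the wire are expected to fail.
$rpc -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
 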
00:33:33.201 [2024-05-15 16:54:40.334035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.201 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.201 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:33.201 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.202 [2024-05-15 16:54:40.406646] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.202 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.459 null0 00:33:33.459 [2024-05-15 16:54:40.525545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.459 [2024-05-15 16:54:40.549526] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:33.459 [2024-05-15 16:54:40.549840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1932396 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1932396 /var/tmp/bperf.sock 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1932396 ']' 00:33:33.459 
16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:33.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:33.459 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.459 [2024-05-15 16:54:40.597112] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:33.459 [2024-05-15 16:54:40.597189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932396 ] 00:33:33.459 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.459 [2024-05-15 16:54:40.673059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.717 [2024-05-15 16:54:40.761723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.717 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.717 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:33.717 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.717 16:54:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.974 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:33.974 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.974 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.974 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.974 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.974 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:34.539 nvme0n1 00:33:34.539 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:34.539 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.539 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.539 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.539 16:54:41 
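On the host side the whole error path is visible verbatim in the trace; condensed here for readability (paths and arguments copied from this run, ordering comments editorial, and rpc_cmd standing for the harness wrapper that talks to the target's default socket):

  bperf_rpc='scripts/rpc.py -s /var/tmp/bperf.sock'
  # bdevperf pinned to core 1 (-m 2), own RPC socket, idle until told to run (-z)
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # injection disabled while attaching, so the connect itself succeeds
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # now corrupt the next 256 crc32c operations: every data digest the target
  # computes for a read PDU is wrong, which the host reports as a digest error
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted read then surfaces below as a data digest error completing with status (00/22), NVMe's Transient Transport Error, with dnr:0; combined with --bdev-retry-count -1 the I/O is retried indefinitely, which is why the same tqpair keeps logging the pattern for the full two-second run.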
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:34.539 16:54:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:34.539 Running I/O for 2 seconds... 00:33:34.539 [2024-05-15 16:54:41.627878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.627926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.627954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.641448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.641479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.641506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.655765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.655797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.655821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.668800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.668832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.668857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.681255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.681301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.681331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.695390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.695422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.695441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.710322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.710354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.710372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.722239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.722269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.722289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.739058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.739092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.739112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.539 [2024-05-15 16:54:41.755316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.539 [2024-05-15 16:54:41.755345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.539 [2024-05-15 16:54:41.755364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.766860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.766894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.766914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.783111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.783144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.783164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.795314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.795341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.795367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.811817] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.811860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.811880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.825454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.825500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.825519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.840292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.840322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.840339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.853587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.853620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.853639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.866063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.866093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.866110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.879251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.879295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.879313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.893610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.893655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.893673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.905599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.905631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.905650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.922146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.922174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.922194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.935646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.935677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.935694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.946354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.946402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.962770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.962801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.962820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.977963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.977998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.978017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:41.992018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:41.992048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:41.992068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:42.004693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:42.004723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:42.004742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.797 [2024-05-15 16:54:42.020700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:34.797 [2024-05-15 16:54:42.020734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.797 [2024-05-15 16:54:42.020753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.034144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.034174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.034195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.046347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.046377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.046405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.063258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.063318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.063337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.075042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.075071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.075087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.090373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.090407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.090442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.106464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.106495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.106531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.118701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.118736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.118756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.132642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.132677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.132697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.147027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.147062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.147081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.161324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.161355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.161387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.176290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.176318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.176334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.188078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.188112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:35.055 [2024-05-15 16:54:42.188131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.204618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.204654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.204673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.220754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.220783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.220799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.232758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.232806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.232826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.247521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.247566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.055 [2024-05-15 16:54:42.247584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.055 [2024-05-15 16:54:42.263400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.055 [2024-05-15 16:54:42.263445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.056 [2024-05-15 16:54:42.263461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.056 [2024-05-15 16:54:42.275649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.056 [2024-05-15 16:54:42.275683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.056 [2024-05-15 16:54:42.275702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.313 [2024-05-15 16:54:42.292043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.313 [2024-05-15 16:54:42.292078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.313 [2024-05-15 16:54:42.292102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.313 [2024-05-15 16:54:42.303389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.313 [2024-05-15 16:54:42.303417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.313 [2024-05-15 16:54:42.303432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.319506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.319535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.319551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.331171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.331205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.331235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.346189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.346244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.346268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.361635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.361665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.361683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.374066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.374112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.374131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.389431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.389462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.389494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.405454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.405484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.405502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.417741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.417791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.417810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.435018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.435053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.435072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.446130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.446165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.446183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.461747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.461782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.461802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.475329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.475360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.475377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.487121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 
00:33:35.314 [2024-05-15 16:54:42.487151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.487168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.503166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.503196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.503212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.516036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.516100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.314 [2024-05-15 16:54:42.532368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.314 [2024-05-15 16:54:42.532413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.314 [2024-05-15 16:54:42.532430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.547799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.547829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.547846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.561119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.561154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.561173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.577375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.577423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.577441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.591435] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.591466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.591483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.603432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.603460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.603491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.619349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.619377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.619408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.631262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.631295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.631315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.647208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.647246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.647263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.663132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.663164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.663187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.675674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.675703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.675736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.692226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.692260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.692279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.704447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.704503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.704519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.717883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.717914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.717931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.733415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.733445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.733463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.745257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.745294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.745326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.762068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.572 [2024-05-15 16:54:42.762102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.572 [2024-05-15 16:54:42.762121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.572 [2024-05-15 16:54:42.777439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.573 [2024-05-15 16:54:42.777468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.573 [2024-05-15 16:54:42.777486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.573 [2024-05-15 16:54:42.789424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.573 [2024-05-15 16:54:42.789460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.573 [2024-05-15 16:54:42.789478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.847 [2024-05-15 16:54:42.805010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.847 [2024-05-15 16:54:42.805039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.805070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.817191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.817231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.817252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.830562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.830595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.830613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.845994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.846039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.846059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.859522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.859552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.859569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.871104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.871137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.871156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.885736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.885766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.885784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.897451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.897484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.897508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.914225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.914270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.914287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.927527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.927572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.927589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.941510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.941566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.957332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.957360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.957392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.968499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.968541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:35.848 [2024-05-15 16:54:42.968556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.983851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.983881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:42.983898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:42.999948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:42.999981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:43.000000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:43.013848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:43.013878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:43.013895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:43.025518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:43.025553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:43.025571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:43.043704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:43.043747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:43.043764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.848 [2024-05-15 16:54:43.055615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:35.848 [2024-05-15 16:54:43.055648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.848 [2024-05-15 16:54:43.055667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.069827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.069856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:19355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.069888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.084121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.084157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.084176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.096982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.097011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.097044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.110978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.111014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.111033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.125334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.125374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.125391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.138304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.138334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.138350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.149947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.149975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.150008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.164411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.164441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.164458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.177818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.177848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.177865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.189843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.189873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.189889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.202870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.202898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.202928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.215536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.215563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.215593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.229309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.229338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.229356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.240097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.240123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.240154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.254667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 
00:33:36.112 [2024-05-15 16:54:43.254712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.254736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.266874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.266904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.266920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.279161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.279190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.279207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.290697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.290725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.290757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.305091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.305121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.305139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.317119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.317148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.317165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.112 [2024-05-15 16:54:43.329582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.112 [2024-05-15 16:54:43.329609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.112 [2024-05-15 16:54:43.329640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.372 [2024-05-15 16:54:43.341730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.372 [2024-05-15 16:54:43.341758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.372 [2024-05-15 16:54:43.341774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.372 [2024-05-15 16:54:43.355653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.372 [2024-05-15 16:54:43.355680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.372 [2024-05-15 16:54:43.355712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.368573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.368609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.368628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.379126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.379152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.379183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.394122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.394151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.394182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.408575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.408603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.408634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.419363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.419406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.419421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.433273] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.433303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.448106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.448135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.448152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.459113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.459140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.459171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.473605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.473650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.473667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.487039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.487070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.487088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.498555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.498582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.498612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.510950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.510977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.511008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.525206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.525241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.525258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.538482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.538524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.538539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.551294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.551323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.551341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.564438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.564467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.564484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.577080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.577109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.577126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.373 [2024-05-15 16:54:43.588415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.373 [2024-05-15 16:54:43.588453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.373 [2024-05-15 16:54:43.588485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.631 [2024-05-15 16:54:43.602459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420) 00:33:36.631 [2024-05-15 16:54:43.602513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.631 [2024-05-15 16:54:43.602530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:36.631 [2024-05-15 16:54:43.614623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3a420)
00:33:36.631 [2024-05-15 16:54:43.614653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:36.631 [2024-05-15 16:54:43.614671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:36.631
00:33:36.631 Latency(us)
00:33:36.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.631 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:36.631 nvme0n1 : 2.05 18111.45 70.75 0.00 0.00 6920.38 3422.44 47768.46
00:33:36.631 ===================================================================================================================
00:33:36.631 Total : 18111.45 70.75 0.00 0.00 6920.38 3422.44 47768.46
00:33:36.631 0
00:33:36.631 16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:36.631 16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:36.631 16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:36.631 16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:36.631 | .driver_specific
00:33:36.631 | .nvme_error
00:33:36.631 | .status_code
00:33:36.631 | .command_transient_transport_error'
00:33:36.888 16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:33:36.888 16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1932396
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1932396 ']'
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1932396
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1932396
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1932396'
killing process with pid 1932396
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1932396
Received shutdown signal, test time was about 2.000000 seconds
00:33:36.888
00:33:36.888 Latency(us)
00:33:36.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.888 ===================================================================================================================
00:33:36.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
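The pass/fail signal for each of these runs is the NVMe transient-transport-error counter read back over the bperf RPC socket; the (( 145 > 0 )) check above is that counter for the 4096-byte randread pass. Condensed into plain shell, the get_transient_errcount helper amounts to the sketch below (assuming the socket path and bdev name used in this run; the nvme_error stats are only populated because bdev_nvme_set_options is called with --nvme-error-stat before the controller is attached):

    # Read per-bdev I/O stats from the bdevperf RPC socket and extract the
    # count of completions that failed with COMMAND TRANSIENT TRANSPORT ERROR.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # the run only passes if the injected digest corruption surfaced here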
16:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1932396
00:33:37.146 16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1932805
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1932805 /var/tmp/bperf.sock
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1932805 ']'
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:37.146 [2024-05-15 16:54:44.225133] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:33:37.146 [2024-05-15 16:54:44.225205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932805 ]
00:33:37.146 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:37.146 Zero copy mechanism will not be used.
00:33:37.146 EAL: No free 2048 kB hugepages reported on node 1
00:33:37.146 [2024-05-15 16:54:44.291589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:37.404 [2024-05-15 16:54:44.375397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:37.404 16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:37.661 16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:37.661 16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:37.661 16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
16:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:37.919 nvme0n1
00:33:37.919 16:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
16:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
16:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:37.919 16:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:37.919 16:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
16:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:38.177 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:38.177 Zero copy mechanism will not be used.
00:33:38.177 Running I/O for 2 seconds...
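The trace above re-arms digest-error injection for the new 131072-byte, qd=16 randread run before driving I/O. Condensed into plain shell, the RPC sequence is the sketch below (same socket, target address, and subsystem NQN as in this run; the flag comments are a best-effort reading, not authoritative):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # track per-status-code NVMe error counters; retry count -1 as used by digest.sh
    $RPC accel_error_inject_error -o crc32c -t disable                   # clear any crc32c injection left over from the previous run
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # attach with TCP data digest (ddgst) enabled
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # re-arm corruption of crc32c ops; -i 32 taken verbatim from the run above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                             # drive the 2-second workload

Each injected corruption then shows up below as a data digest error on the qpair, and the corresponding read completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of success, which is what feeds the transient-error counter checked after the run.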
00:33:38.177 [2024-05-15 16:54:45.261500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.261571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.261593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.269981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.270016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.270035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.278275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.278304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.278337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.286460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.286488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.286504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.294678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.294710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.294728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.302861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.302889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.302906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.310948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.310979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.177 [2024-05-15 16:54:45.310998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.177 [2024-05-15 16:54:45.319480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.177 [2024-05-15 16:54:45.319508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.319524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.328289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.328317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.328347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.336947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.336979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.336997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.345535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.345576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.345592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.354281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.354311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.354329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.362745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.362773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.362789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.371205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.371244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.371276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.380109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.380141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.380160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.388962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.388996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.389024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.178 [2024-05-15 16:54:45.397293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.178 [2024-05-15 16:54:45.397326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.178 [2024-05-15 16:54:45.397342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.436 [2024-05-15 16:54:45.405956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.436 [2024-05-15 16:54:45.405993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.436 [2024-05-15 16:54:45.406012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.436 [2024-05-15 16:54:45.414748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.436 [2024-05-15 16:54:45.414786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.436 [2024-05-15 16:54:45.414806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.436 [2024-05-15 16:54:45.423435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.436 [2024-05-15 16:54:45.423469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.436 [2024-05-15 16:54:45.423486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.436 [2024-05-15 16:54:45.432229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.436 [2024-05-15 16:54:45.432291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:38.436 [2024-05-15 16:54:45.432308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.436 [2024-05-15 16:54:45.441134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.436 [2024-05-15 16:54:45.441168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.441187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.449725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.449761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.449780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.458350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.458381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.458399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.467015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.467047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.467066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.475293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.475325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.475342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.483284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.483313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.483330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.491327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.491372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.491389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.499509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.499557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.499576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.507807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.507839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.507856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.515893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.515926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.515945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.523965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.523993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.524010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.532522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.532551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.532595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.541307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.541336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.541352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.549900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.549932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.549950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.558129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.558161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.558180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.566842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.566874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.566892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.575616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.575647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.575665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.584228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.584272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.584288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.593367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.593395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.593427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.602090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.602122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.602141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.610645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 
00:33:38.437 [2024-05-15 16:54:45.610682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.610702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.619369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.619398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.619414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.628304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.628333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.628350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.636997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.637029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.637047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.645913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.645945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.645963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.654686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.654718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.654736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.437 [2024-05-15 16:54:45.663367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.437 [2024-05-15 16:54:45.663395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.437 [2024-05-15 16:54:45.663411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.695 [2024-05-15 16:54:45.671998] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.695 [2024-05-15 16:54:45.672030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.695 [2024-05-15 16:54:45.672048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.695 [2024-05-15 16:54:45.680776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.695 [2024-05-15 16:54:45.680809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.695 [2024-05-15 16:54:45.680828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.695 [2024-05-15 16:54:45.689553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.689600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.689619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.698347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.698377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.698394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.708030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.708064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.708083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.718413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.718460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.718477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.729054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.729089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.729108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.739661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.739696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.739715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.750157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.750187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.750228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.760555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.760598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.760615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.770871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.770915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.770938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.781108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.781154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.781174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.791208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.791249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.791269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.801475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.801534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.801554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.810833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.810882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.810902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.820749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.820778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.820812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.830396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.830427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.830445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.841043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.841072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.841104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.851641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.851676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.851696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.861874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.861908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.861927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.871623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.871658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.871678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.881543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.881578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.881598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.890912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.890943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.890961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.899738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.899768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.899800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.907909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.907937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.907968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.696 [2024-05-15 16:54:45.915963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.696 [2024-05-15 16:54:45.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.696 [2024-05-15 16:54:45.916023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.923945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.923974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.923990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.932014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.932043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.932065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.939913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.939955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.939972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.948087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.948116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.948147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.956238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.956266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.956283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.964379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.964409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.964425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.972480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.972523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.972540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.980819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.980862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.980878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.988855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.988883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.988916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.955 [2024-05-15 16:54:45.996841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.955 [2024-05-15 16:54:45.996870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.955 [2024-05-15 16:54:45.996886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.004767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.004800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.004833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.012863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.012891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.012908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.020876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.020917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.020933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.029428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.029472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.029489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.037643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.037671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.037703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.045671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.045713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.045729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.053756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.053783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.053815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.061846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.061873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.061904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.069955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.069982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.070012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.078070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.078098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.078115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.085962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.085989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.086020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.093934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.093962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.093979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.101978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 
00:33:38.956 [2024-05-15 16:54:46.102009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.102027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.110279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.110308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.110339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.118477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.118506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.118537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.126707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.126736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.126768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.134859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.134887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.134919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.142823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.142860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.142879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.150995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.151023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.151055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.159025] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.159052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.159083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.167211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.167244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.167276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.956 [2024-05-15 16:54:46.175231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:38.956 [2024-05-15 16:54:46.175258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.956 [2024-05-15 16:54:46.175292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.183199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.183237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.183255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.191308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.191336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.191353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.199348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.199376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.199393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.207531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.207558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.207574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.215941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.215972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.215991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.224428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.224456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.224489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.232104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.232133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.232149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.240101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.240128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.240159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.248304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.248333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.248349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.256445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.256472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.256504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.264626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.264667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.264684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.272946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.272973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.273005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.281196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.281246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.281270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.289397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.289424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.289456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.297974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.298002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.298018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.306194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.306233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.306253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.314507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.314551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.314568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.322817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.322845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.322876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.331092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.331123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.331142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.339375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.339404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.339421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.347882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.347910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.347942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.355883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.355916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.355948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.216 [2024-05-15 16:54:46.364018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.216 [2024-05-15 16:54:46.364047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.216 [2024-05-15 16:54:46.364064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.372123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.372151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.372183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.380314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.380343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:39.217 [2024-05-15 16:54:46.380360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.388537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.388581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.388596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.396772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.396804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.396822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.404987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.405030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.405046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.413004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.413031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.413062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.420998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.421025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.421058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.428918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.428949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.428968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.217 [2024-05-15 16:54:46.436944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.217 [2024-05-15 16:54:46.436971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.217 [2024-05-15 16:54:46.436987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.444866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.444907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.444923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.453358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.453385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.453417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.463212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.463253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.463272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.473378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.473407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.473440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.483117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.483151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.483170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.493518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.493553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.493572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.503815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.503851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.503885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.513888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.513922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.513941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.524230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.524259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.524290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.533404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.533434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.533451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.543015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.543051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.543070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.552791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.552825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.552844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.563238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.563284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.563300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.572719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 
00:33:39.476 [2024-05-15 16:54:46.572763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.572780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.582888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.582923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.582942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.593267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.593298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.593315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.603259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.476 [2024-05-15 16:54:46.603290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.476 [2024-05-15 16:54:46.603307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.476 [2024-05-15 16:54:46.612722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.477 [2024-05-15 16:54:46.612751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.477 [2024-05-15 16:54:46.612783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.477 [2024-05-15 16:54:46.623865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.477 [2024-05-15 16:54:46.623896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.477 [2024-05-15 16:54:46.623913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.477 [2024-05-15 16:54:46.634086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f51f0) 00:33:39.477 [2024-05-15 16:54:46.634121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.477 [2024-05-15 16:54:46.634140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.477 [2024-05-15 16:54:46.644060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24f51f0)
00:33:39.477 [2024-05-15 16:54:46.644094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.477 [2024-05-15 16:54:46.644114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... dozens of further entries of the same shape trimmed: nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x24f51f0), and the affected READ (len:32, varying lba and cid) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), from 16:54:46.654 through 16:54:47.257 ...]
00:33:40.255
00:33:40.255 Latency(us)
00:33:40.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.255 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:40.255 nvme0n1 : 2.00 3470.78 433.85 0.00 0.00 4603.75 1444.22 11456.66
00:33:40.255 ===================================================================================================================
00:33:40.255 Total : 3470.78 433.85 0.00 0.00 4603.75 1444.22 11456.66
00:33:40.255 0
00:33:40.255 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:40.255 | .driver_specific
00:33:40.255 | .nvme_error
00:33:40.255 | .status_code
00:33:40.255 | .command_transient_transport_error'
16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:40.513 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 ))
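
Note: the (( 224 > 0 )) check above is the pass criterion for the randread phase. Because the controller was set up with bdev_nvme_set_options --nvme-error-stat (the same call is traced again below for the next phase), bdev_get_iostat reports a per-status-code NVMe error histogram under driver_specific, and get_transient_errcount simply pulls out the COMMAND TRANSIENT TRANSPORT ERROR counter, 224 here. A minimal standalone sketch of that extraction, assuming the socket path and bdev name used in this run:

    # Count transient transport errors recorded for nvme0n1 over the bperf RPC socket.
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # the phase fails unless the injected digest errors were actually counted
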
00:33:40.513 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1932805
00:33:40.513 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1932805 ']'
00:33:40.513 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1932805
00:33:40.513 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:40.513 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:40.514 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1932805
00:33:40.514 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:40.514 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:40.514 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1932805'
00:33:40.514 killing process with pid 1932805
00:33:40.514 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1932805
00:33:40.514 Received shutdown signal, test time was about 2.000000 seconds
00:33:40.514
00:33:40.514 Latency(us)
00:33:40.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.514 ===================================================================================================================
00:33:40.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:40.514 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1932805
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1933209
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1933209 /var/tmp/bperf.sock
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1933209 ']'
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:40.772 16:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
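
Note: this is the launch pattern digest.sh uses for every phase: bdevperf is started with -z, so it comes up idle and waits to be driven over its private RPC socket, and waitforlisten (the autotest_common.sh helper traced above) polls until that socket accepts connections. Condensed into a standalone sketch, with the binary path shortened and the workload parameters (randwrite, 4096-byte I/O, queue depth 128, 2 s) taken from the trace:

    # Start bdevperf in wait-for-RPC mode (-z) on a dedicated RPC socket, then wait for it.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock  # helper from test/common/autotest_common.sh
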
00:33:40.772 [2024-05-15 16:54:47.790060] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:33:40.772 [2024-05-15 16:54:47.790137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933209 ]
00:33:40.772 EAL: No free 2048 kB hugepages reported on node 1
00:33:40.772 [2024-05-15 16:54:47.856510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:40.772 [2024-05-15 16:54:47.937967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:41.030 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:41.030 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:41.030 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:41.030 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:41.287 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:41.287 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:41.287 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.287 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:41.287 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:41.287 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:41.544 nvme0n1
00:33:41.545 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:41.545 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:41.545 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.545 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:41.545 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:41.545 16:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:41.803 Running I/O for 2 seconds...
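
Note: the RPC sequence above is the heart of the error scenario. bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 makes the host count every NVMe error by status code while retrying failed I/O indefinitely; accel_error_inject_error goes through rpc_cmd, which in this harness talks to the NVMe-oF target application's default RPC socket rather than to bperf.sock, so it is the target's crc32c results that get corrupted; and the controller is attached with --ddgst so data digests are actually negotiated on the connection. Corrupted target-side digest computations are what surface as the data digest errors logged before (randread) and after (randwrite) this point. A sketch of the sequence, with paths shortened and the address, NQN and -i 256 argument exactly as traced:

    # Host side (bperf.sock): per-status-code error counters, infinite bdev retries.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side: keep injection disabled while the controller attaches cleanly.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # Attach with data digest enabled; the nvme0n1 bdev appears on success.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Target side: corrupt subsequent crc32c results (-i 256 as traced above).
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the prepared write job inside bdevperf.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
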
00:33:41.803 [2024-05-15 16:54:48.852664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ed920
00:33:41.803 [2024-05-15 16:54:48.853783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:41.803 [2024-05-15 16:54:48.853829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... dozens of further entries of the same shape trimmed: tcp.c:2058:data_crc32_calc_done reports a data digest error on tqpair=(0xb45c70) with a varying pdu address, and the affected WRITE (len:1, varying lba and cid) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), from 16:54:48.864 through 16:54:49.621 ...]
00:33:42.675 [2024-05-15 16:54:49.633121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fa7d8
00:33:42.675 [2024-05-15 16:54:49.635177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:42.675 [2024-05-15 16:54:49.635208]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.641914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190eee38 00:33:42.675 [2024-05-15 16:54:49.642791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.642821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.656244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e1710 00:33:42.675 [2024-05-15 16:54:49.657786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.657817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.667892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f1868 00:33:42.675 [2024-05-15 16:54:49.668924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.668953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.679967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190eb328 00:33:42.675 [2024-05-15 16:54:49.680824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.680852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.693324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190eff18 00:33:42.675 [2024-05-15 16:54:49.695063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.695090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.704157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ff3c8 00:33:42.675 [2024-05-15 16:54:49.705432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.705460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.715717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ebb98 00:33:42.675 [2024-05-15 16:54:49.717045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 
16:54:49.717072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.727519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ee190 00:33:42.675 [2024-05-15 16:54:49.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.728809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.739213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f4298 00:33:42.675 [2024-05-15 16:54:49.740453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.740481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.750833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fc998 00:33:42.675 [2024-05-15 16:54:49.752099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.752126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.675 [2024-05-15 16:54:49.761722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190de038 00:33:42.675 [2024-05-15 16:54:49.762950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.675 [2024-05-15 16:54:49.762978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.772597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190efae0 00:33:42.676 [2024-05-15 16:54:49.773389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.773417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.784212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190de8a8 00:33:42.676 [2024-05-15 16:54:49.785021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.785050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.796252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e2c28 00:33:42.676 [2024-05-15 16:54:49.797041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 
[2024-05-15 16:54:49.797069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.808126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e7818 00:33:42.676 [2024-05-15 16:54:49.808944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.808978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.821403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e6300 00:33:42.676 [2024-05-15 16:54:49.822854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.822881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.832346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f8618 00:33:42.676 [2024-05-15 16:54:49.833273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.833301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.844084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fc128 00:33:42.676 [2024-05-15 16:54:49.844992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.845020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.855987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ebb98 00:33:42.676 [2024-05-15 16:54:49.857090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.857117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.867918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ff3c8 00:33:42.676 [2024-05-15 16:54:49.869072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.869099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.879812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190eb328 00:33:42.676 [2024-05-15 16:54:49.880907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:477 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:42.676 [2024-05-15 16:54:49.880934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.676 [2024-05-15 16:54:49.891699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e9e10 00:33:42.676 [2024-05-15 16:54:49.892987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.676 [2024-05-15 16:54:49.893014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.902803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fe2e8 00:33:42.934 [2024-05-15 16:54:49.904028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.904056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.913656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ebb98 00:33:42.934 [2024-05-15 16:54:49.914457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.914485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.925272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ee190 00:33:42.934 [2024-05-15 16:54:49.926118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.926145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.937263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ed4e8 00:33:42.934 [2024-05-15 16:54:49.938244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.938271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.949119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fef90 00:33:42.934 [2024-05-15 16:54:49.950055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.950083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.960987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190df550 00:33:42.934 [2024-05-15 16:54:49.961959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15405 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.961987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.972804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e9e10 00:33:42.934 [2024-05-15 16:54:49.973774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.973801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.984489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f1430 00:33:42.934 [2024-05-15 16:54:49.985417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.934 [2024-05-15 16:54:49.985445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:42.934 [2024-05-15 16:54:49.996441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190de8a8 00:33:42.934 [2024-05-15 16:54:49.997506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:49.997534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.008507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fd640 00:33:42.935 [2024-05-15 16:54:50.009955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.009986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.021863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e5658 00:33:42.935 [2024-05-15 16:54:50.023138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.023167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.033934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e9168 00:33:42.935 [2024-05-15 16:54:50.035376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.035404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.045101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e6738 00:33:42.935 [2024-05-15 16:54:50.046472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.046500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.055986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f6cc8 00:33:42.935 [2024-05-15 16:54:50.057057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.057086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.068349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190df118 00:33:42.935 [2024-05-15 16:54:50.069282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.069313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.080168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190efae0 00:33:42.935 [2024-05-15 16:54:50.081210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.081244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.092203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fdeb0 00:33:42.935 [2024-05-15 16:54:50.093049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.093078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.105749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ecc78 00:33:42.935 [2024-05-15 16:54:50.107523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.107551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.118182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ed4e8 00:33:42.935 [2024-05-15 16:54:50.120197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.120240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.126644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f9f68 00:33:42.935 [2024-05-15 16:54:50.127418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:1366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.127446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.137747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e5658 00:33:42.935 [2024-05-15 16:54:50.138512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.138539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:42.935 [2024-05-15 16:54:50.151043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e88f8 00:33:42.935 [2024-05-15 16:54:50.152027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.935 [2024-05-15 16:54:50.152055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.163154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e6300 00:33:43.193 [2024-05-15 16:54:50.164341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.164370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.174266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f1430 00:33:43.193 [2024-05-15 16:54:50.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.186480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f1868 00:33:43.193 [2024-05-15 16:54:50.187857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.187886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.197667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fbcf0 00:33:43.193 [2024-05-15 16:54:50.198456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.198483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.209430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e5220 00:33:43.193 [2024-05-15 16:54:50.210061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.210088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.221306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ec840 00:33:43.193 [2024-05-15 16:54:50.222266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.222293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.233127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e95a0 00:33:43.193 [2024-05-15 16:54:50.234133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.193 [2024-05-15 16:54:50.234160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.193 [2024-05-15 16:54:50.245164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fd640 00:33:43.194 [2024-05-15 16:54:50.245993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.246021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.257123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e1710 00:33:43.194 [2024-05-15 16:54:50.258249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.258276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.269038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f4298 00:33:43.194 [2024-05-15 16:54:50.270029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.270056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.280950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e3d08 00:33:43.194 [2024-05-15 16:54:50.282256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.282283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.291792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ed920 00:33:43.194 [2024-05-15 16:54:50.293579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.293607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.301775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190df988 00:33:43.194 [2024-05-15 16:54:50.302521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.302549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.313991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ea680 00:33:43.194 [2024-05-15 16:54:50.314948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.314975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.327129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f6020 00:33:43.194 [2024-05-15 16:54:50.328291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.328319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.338981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e4578 00:33:43.194 [2024-05-15 16:54:50.340155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.340182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.352337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fc560 00:33:43.194 [2024-05-15 16:54:50.354078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.354106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.363304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190dece0 00:33:43.194 [2024-05-15 16:54:50.364541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.364569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.373924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190edd58 00:33:43.194 [2024-05-15 16:54:50.375678] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.375706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.384755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e5658 00:33:43.194 [2024-05-15 16:54:50.385533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.385560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.396918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e4578 00:33:43.194 [2024-05-15 16:54:50.397936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.397964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.194 [2024-05-15 16:54:50.408223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e0630 00:33:43.194 [2024-05-15 16:54:50.409196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.194 [2024-05-15 16:54:50.409230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.453 [2024-05-15 16:54:50.420666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e84c0 00:33:43.453 [2024-05-15 16:54:50.421850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.453 [2024-05-15 16:54:50.421878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:43.453 [2024-05-15 16:54:50.433922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fd208 00:33:43.453 [2024-05-15 16:54:50.435231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.453 [2024-05-15 16:54:50.435259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.453 [2024-05-15 16:54:50.445958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fe720 00:33:43.453 [2024-05-15 16:54:50.447400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.453 [2024-05-15 16:54:50.447442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.453 [2024-05-15 16:54:50.457945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f96f8 00:33:43.453 [2024-05-15 
16:54:50.459368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.453 [2024-05-15 16:54:50.459397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.453 [2024-05-15 16:54:50.468579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190df550 00:33:43.453 [2024-05-15 16:54:50.470675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.470703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.478885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e7c50 00:33:43.454 [2024-05-15 16:54:50.479883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.479911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.492076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f92c0 00:33:43.454 [2024-05-15 16:54:50.493191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.493225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.504057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f6020 00:33:43.454 [2024-05-15 16:54:50.505117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.505144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.516118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f81e0 00:33:43.454 [2024-05-15 16:54:50.517361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.517390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.527415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e5220 00:33:43.454 [2024-05-15 16:54:50.528712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.528747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.539707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f2948 
00:33:43.454 [2024-05-15 16:54:50.541128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.541157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.552151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e7818 00:33:43.454 [2024-05-15 16:54:50.553766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.553795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.564553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ee5c8 00:33:43.454 [2024-05-15 16:54:50.566275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.566303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.576700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e01f8 00:33:43.454 [2024-05-15 16:54:50.578486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.578524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.588806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f1ca0 00:33:43.454 [2024-05-15 16:54:50.590777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.590813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.597064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fb8b8 00:33:43.454 [2024-05-15 16:54:50.598008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.598036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.609090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f3a28 00:33:43.454 [2024-05-15 16:54:50.610026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.610054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.620790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with 
pdu=0x2000190e49b0 00:33:43.454 [2024-05-15 16:54:50.621767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.632580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fac10 00:33:43.454 [2024-05-15 16:54:50.633511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.633539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.644274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f31b8 00:33:43.454 [2024-05-15 16:54:50.645180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.645207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.656134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190eaef0 00:33:43.454 [2024-05-15 16:54:50.656869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.656897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.454 [2024-05-15 16:54:50.669417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190ddc00 00:33:43.454 [2024-05-15 16:54:50.671034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.454 [2024-05-15 16:54:50.671062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.681623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fb480 00:33:43.713 [2024-05-15 16:54:50.683456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.683484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.692400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fd640 00:33:43.713 [2024-05-15 16:54:50.693769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.693797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.702830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb45c70) with pdu=0x2000190f5378 00:33:43.713 [2024-05-15 16:54:50.704904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.704933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.713870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e6b70 00:33:43.713 [2024-05-15 16:54:50.714803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.714830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.725748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f8618 00:33:43.713 [2024-05-15 16:54:50.726702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.737454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f2d80 00:33:43.713 [2024-05-15 16:54:50.738358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.738386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.749175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f1868 00:33:43.713 [2024-05-15 16:54:50.750084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.750112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.760879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f0bc0 00:33:43.713 [2024-05-15 16:54:50.761886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.761913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.772661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e3498 00:33:43.713 [2024-05-15 16:54:50.773657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.773685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.784352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb45c70) with pdu=0x2000190e1b48 00:33:43.713 [2024-05-15 16:54:50.785247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.785275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.796097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190f2948 00:33:43.713 [2024-05-15 16:54:50.797044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.797072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.807893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e88f8 00:33:43.713 [2024-05-15 16:54:50.808837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.808865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.819634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e6fa8 00:33:43.713 [2024-05-15 16:54:50.820542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.820569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.831325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190e23b8 00:33:43.713 [2024-05-15 16:54:50.832285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.832319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 [2024-05-15 16:54:50.843136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb45c70) with pdu=0x2000190fb480 00:33:43.713 [2024-05-15 16:54:50.844104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-05-15 16:54:50.844131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.713 00:33:43.713 Latency(us) 00:33:43.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.713 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.713 nvme0n1 : 2.00 21157.94 82.65 0.00 0.00 6039.75 2500.08 15049.01 00:33:43.713 =================================================================================================================== 00:33:43.713 Total : 21157.94 82.65 0.00 0.00 6039.75 2500.08 15049.01 00:33:43.713 0 00:33:43.713 16:54:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:43.713 16:54:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:43.713 16:54:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:43.713 | .driver_specific
00:33:43.713 | .nvme_error
00:33:43.713 | .status_code
00:33:43.713 | .command_transient_transport_error'
00:33:43.713 16:54:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1933209
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1933209 ']'
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1933209
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1933209
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1933209'
00:33:43.971 killing process with pid 1933209
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1933209
00:33:43.971 Received shutdown signal, test time was about 2.000000 seconds
00:33:43.971
00:33:43.971 Latency(us)
00:33:43.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.971 ===================================================================================================================
00:33:43.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:43.971 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1933209
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1933619
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1933619 /var/tmp/bperf.sock
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1933619 ']' 
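The digest.sh@71 check traced above is where the harness verifies that the injected data-digest failures were actually observed: it queries bdevperf's I/O statistics over the bperf RPC socket and extracts the NVMe error counter for COMMAND TRANSIENT TRANSPORT ERROR (00/22), the status a TCP data-digest (DDGST) mismatch completes with. A minimal sketch of that check, reconstructed from the xtrace (the rpc.py path, socket, and jq filter are verbatim from the trace; wrapping them in a standalone shell function is an illustration, not the literal digest.sh source):

  get_transient_errcount() {
      # Ask bdevperf for per-bdev iostat over the RPC socket it was started
      # with (-r /var/tmp/bperf.sock) and pull out the transient-transport
      # error counter, which is tracked because bdev_nvme_set_options was
      # given --nvme-error-stat in this test.
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  # The run above passes because 166 such completions were counted:
  (( $(get_transient_errcount nvme0n1) > 0 ))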
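After the old bdevperf (pid 1933209) is killed, the harness launches a new instance (pid 1933619) for the 131072-byte, qd=16 round of the same error test. As the trace below shows, setup talks to two RPC endpoints: bdevperf's socket for the NVMe bdev side, and the target-side SPDK socket for the CRC32C error injection. A hedged sketch of that sequence, using only commands that appear in the trace (the rpc_cmd wrapper below assumes the target listens on rpc.py's default socket; the real autotest helpers resolve the address for it):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }  # default socket, target side

  # Record NVMe error statistics and retry failed I/O indefinitely.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no injection is active while the controller attaches.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # Attach with data digest (DDGST) enabled so payload CRCs are checked.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt 32 crc32c operations, producing the digest errors logged below.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32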
00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:44.229 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:44.229 [2024-05-15 16:54:51.421427] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:44.229 [2024-05-15 16:54:51.421517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933619 ] 00:33:44.229 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:44.229 Zero copy mechanism will not be used. 00:33:44.229 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.487 [2024-05-15 16:54:51.492172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.487 [2024-05-15 16:54:51.576651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.487 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:44.487 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:44.487 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.487 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.745 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:44.745 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.745 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:44.745 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.745 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.745 16:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.314 nvme0n1 00:33:45.314 16:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:45.314 16:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.314 16:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.314 16:54:52 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.314 16:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:45.314 16:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.314 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.314 Zero copy mechanism will not be used. 00:33:45.314 Running I/O for 2 seconds... 00:33:45.314 [2024-05-15 16:54:52.473151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.473583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.473625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.314 [2024-05-15 16:54:52.483928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.484297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.484328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.314 [2024-05-15 16:54:52.494361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.494741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.494774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.314 [2024-05-15 16:54:52.503565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.503914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.503945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.314 [2024-05-15 16:54:52.512619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.512972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.513004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.314 [2024-05-15 16:54:52.522580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.522928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.522960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.314 [2024-05-15 16:54:52.532240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.314 [2024-05-15 16:54:52.532627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.314 [2024-05-15 16:54:52.532659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.542157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.542498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.542544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.552638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.552987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.553034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.563678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.564019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.564064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.572878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.573270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.573297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.582152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.582526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.582570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.591520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.591859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 
[2024-05-15 16:54:52.591902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.600516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.600875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.600904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.609903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.610314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.610342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.618948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.619322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.619366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.627324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.627716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.627769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.636398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.636570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.636598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.646449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.646780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.646808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.656011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.656357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.665626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.665980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.666026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.675384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.675749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.675793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.684736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.684950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.684978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.694609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.694949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.694977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.703898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.704241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.704285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.713864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.714199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.714241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.723459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.723791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.723819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.732964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.733343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.733372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.743665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.744019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.744046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.753339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.753702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.753734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.762767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.763109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.763136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.771857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.772227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.772264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.781231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.781561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.781600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.573 [2024-05-15 16:54:52.791419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.573 [2024-05-15 16:54:52.791743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.573 [2024-05-15 16:54:52.791771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.801570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.801912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.801940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.811055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.811380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.811408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.821478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.821868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.821898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.831506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.831877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.831905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.840466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.840861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.840892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.850827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.851173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.851223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.861002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.861349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.861378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.871285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.871653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.871695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.881447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.881802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.881835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.890648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.890981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.891008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.900348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.900687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.900734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.909819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.910136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.910178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.919358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.919540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.919568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.929246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 
00:33:45.839 [2024-05-15 16:54:52.929616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.929662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.839 [2024-05-15 16:54:52.938324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.839 [2024-05-15 16:54:52.938688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.839 [2024-05-15 16:54:52.938731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:52.948000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:52.948390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:52.948432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:52.956500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:52.956670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:52.956698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:52.965109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:52.965343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:52.965371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:52.974910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:52.975281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:52.975309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:52.984323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:52.984424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:52.984455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:52.994044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:52.994377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:52.994406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.003881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.004236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.004279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.013544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.013913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.023088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.023422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.023450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.032820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.033148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.033191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.042477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.042810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.042837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.052338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.052664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.840 [2024-05-15 16:54:53.061428] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:45.840 [2024-05-15 16:54:53.061768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.840 [2024-05-15 16:54:53.061810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.070067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.070201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.070237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.079126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.079491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.079535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.088833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.089191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.089225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.098817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.099187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.099237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.108961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.109310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.109339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.119026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.119373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.119401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 
16:54:53.128542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.128874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.128922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.138414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.138758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.138786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.147920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.148293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.148335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.157889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.158279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.158322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.168348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.168677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.168705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.178078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.178424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.178452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.189784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.097 [2024-05-15 16:54:53.190158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.097 [2024-05-15 16:54:53.190199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:46.097 [2024-05-15 16:54:53.199594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.199971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.200002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.209843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.210233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.210280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.219185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.219560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.219604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.229511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.229864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.229892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.237993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.238364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.238407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.247377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.247748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.247776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.256710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.257046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.257088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.266455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.266788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.266815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.275832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.276157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.276200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.285664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.286020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.286048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.295745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.296071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.296099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.305822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.306155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.306181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.098 [2024-05-15 16:54:53.316191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.098 [2024-05-15 16:54:53.316553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.098 [2024-05-15 16:54:53.316581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.325713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.326062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.326107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.335456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.335769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.344968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.345351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.345378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.354825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.355170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.355211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.365052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.365238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.365265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.374529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.374875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.374903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.384543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.384862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.384907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.356 [2024-05-15 16:54:53.394486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90 00:33:46.356 [2024-05-15 16:54:53.394842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.356 [2024-05-15 16:54:53.394870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.356 [2024-05-15 16:54:53.404188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90
00:33:46.356 [2024-05-15 16:54:53.404532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.356 [2024-05-15 16:54:53.404575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... roughly 100 further data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets, 16:54:53.413958 through 16:54:54.457498, elided; only the timestamp, lba, and sqhd fields vary ...]
00:33:47.393 [2024-05-15 16:54:54.466339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb47300) with pdu=0x2000190fef90
00:33:47.393 [2024-05-15 16:54:54.466576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:47.393 [2024-05-15 16:54:54.466603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:47.393 
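Every completion in the run above carries the same status pair: spdk_nvme_print_completion renders the NVMe status field as (SCT/SC), so "(00/22)" is status code type 0x0 (generic command status) with status code 0x22 (transient transport error), the code defined for transport-level failures that may succeed on retry, which is exactly what a mismatched CRC32C data digest produces. A small illustrative decoder, assuming only the standard 15-bit NVMe status-field layout (SC in bits 7:0, SCT in bits 10:8, DNR in bit 14) and nothing from this test tree:

#!/usr/bin/env bash
# Illustrative decoder for the (SCT/SC) pair printed above, e.g. "(00/22)".
# Assumes the standard NVMe completion status-field layout:
#   SC [7:0], SCT [10:8], CRD [12:11], M [13], DNR [14]
decode_nvme_status() {
    local status=$1                        # numeric status-field value
    printf 'sct=%02x sc=%02x dnr=%d\n' \
        $(( (status >> 8) & 0x7 )) \
        $(( status & 0xff )) \
        $(( (status >> 14) & 0x1 ))
}

decode_nvme_status 0x0022   # -> sct=00 sc=22 dnr=0 (transient transport error)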
00:33:47.393 Latency(us)
00:33:47.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.393 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:47.393 nvme0n1 : 2.00 3189.75 398.72 0.00 0.00 5003.88 3713.71 11019.76
00:33:47.393 ===================================================================================================================
00:33:47.393 Total : 3189.75 398.72 0.00 0.00 5003.88 3713.71 11019.76
00:33:47.393 0
00:33:47.393 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:47.393 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:47.393 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:47.394 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:47.394 | .driver_specific
00:33:47.394 | .nvme_error
00:33:47.394 | .status_code
00:33:47.394 | .command_transient_transport_error'
00:33:47.651 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 ))
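The assertion just above is the point of the whole run: digest.sh queries the bdevperf app over its /var/tmp/bperf.sock RPC socket for the bdev's NVMe error counters and requires the transient-transport-error count to be non-zero (206 here). A rough standalone equivalent of that check, reusing the rpc.py path, socket name, and jq filter from this log:

#!/usr/bin/env bash
# Rough standalone equivalent of the get_transient_errcount check traced
# above; rpc.py path and /var/tmp/bperf.sock are the values from this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test only passes if at least one transient transport error was counted
# (206 of them in this run).
(( errcount > 0 )) || exit 1
echo "transient transport errors on nvme0n1: $errcount"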
00:33:47.651 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1933619
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1933619 ']'
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1933619
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1933619
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1933619'
00:33:47.652 killing process with pid 1933619
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1933619
00:33:47.652 Received shutdown signal, test time was about 2.000000 seconds
00:33:47.652 
00:33:47.652 Latency(us)
00:33:47.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.652 ===================================================================================================================
00:33:47.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:47.652 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1933619
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1932260
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1932260 ']'
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1932260
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1932260
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1932260'
00:33:47.909 killing process with pid 1932260
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1932260
00:33:47.909 [2024-05-15 16:54:54.986992] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:33:47.909 16:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1932260
00:33:48.166 
00:33:48.166 real 0m15.101s
00:33:48.166 user 0m30.019s
00:33:48.166 sys 0m4.040s
00:33:48.166 16:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:48.166 16:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:48.166 ************************************
00:33:48.166 END TEST nvmf_digest_error
00:33:48.167 ************************************
00:33:48.167 16:54:55 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
16:54:55 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1932260 ']'
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1932260
16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1932260 ']'
16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1932260
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1932260) - No such process
16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1932260 is not found'
Process with pid 1932260 is not found
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
16:54:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
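Both teardowns above funnel through the killprocess helper in autotest_common.sh, and the xtrace shows its shape: require a pid, probe it with kill -0, check the process name so a sudo wrapper is never signalled directly, announce, kill, and reap. A simplified reconstruction of that pattern (not the exact SPDK source; the real helper handles the sudo case with extra steps):

#!/usr/bin/env bash
# Sketch of the killprocess pattern visible in the autotest_common.sh xtrace
# above; a simplified reconstruction, not SPDK's actual implementation.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # @946: require a pid
    kill -0 "$pid" || return 1                         # @950: still alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")    # @952
        [ "$process_name" = sudo ] && return 1         # @956: never signal sudo
    fi
    echo "killing process with pid $pid"               # @964
    kill "$pid"                                        # @965
    wait "$pid" 2>/dev/null                            # @970: reap; ignore if gone
}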
xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.167 16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.167 16:54:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.696 16:54:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:50.696 00:33:50.696 real 0m35.270s 00:33:50.696 user 1m1.444s 00:33:50.696 sys 0m10.017s 00:33:50.696 16:54:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:50.696 16:54:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:50.696 ************************************ 00:33:50.696 END TEST nvmf_digest 00:33:50.696 ************************************ 00:33:50.696 16:54:57 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:33:50.696 16:54:57 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:33:50.696 16:54:57 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:33:50.696 16:54:57 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:50.696 16:54:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:50.696 16:54:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:50.696 16:54:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.696 ************************************ 00:33:50.696 START TEST nvmf_bdevperf 00:33:50.696 ************************************ 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:50.696 * Looking for test storage... 00:33:50.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.696 16:54:57 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.696 16:54:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.697 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:50.697 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:50.697 16:54:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.697 16:54:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:53.224 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:53.224 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:53.224 Found net devices under 0000:09:00.0: cvl_0_0 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:53.224 Found net devices under 0000:09:00.1: cvl_0_1 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.224 16:55:00 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:53.224 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:53.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:33:53.225 00:33:53.225 --- 10.0.0.2 ping statistics --- 00:33:53.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.225 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:33:53.225 00:33:53.225 --- 10.0.0.1 ping statistics --- 00:33:53.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.225 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1936385 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1936385 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1936385 ']' 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.225 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.225 [2024-05-15 16:55:00.230469] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:53.225 [2024-05-15 16:55:00.230570] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.225 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.225 [2024-05-15 16:55:00.309698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:53.225 [2024-05-15 16:55:00.397068] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.225 [2024-05-15 16:55:00.397127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.225 [2024-05-15 16:55:00.397140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.225 [2024-05-15 16:55:00.397151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.225 [2024-05-15 16:55:00.397161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.225 [2024-05-15 16:55:00.397249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.225 [2024-05-15 16:55:00.397316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.225 [2024-05-15 16:55:00.397318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.491 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.491 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:53.491 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.491 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.492 [2024-05-15 16:55:00.539180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.492 Malloc0 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- 
host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.492 [2024-05-15 16:55:00.599934] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:53.492 [2024-05-15 16:55:00.600302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:53.492 { 00:33:53.492 "params": { 00:33:53.492 "name": "Nvme$subsystem", 00:33:53.492 "trtype": "$TEST_TRANSPORT", 00:33:53.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.492 "adrfam": "ipv4", 00:33:53.492 "trsvcid": "$NVMF_PORT", 00:33:53.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.492 "hdgst": ${hdgst:-false}, 00:33:53.492 "ddgst": ${ddgst:-false} 00:33:53.492 }, 00:33:53.492 "method": "bdev_nvme_attach_controller" 00:33:53.492 } 00:33:53.492 EOF 00:33:53.492 )") 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
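Annotation: the 10.0.0.2 listener address used by this test sits in a network namespace that nvmf/common.sh@248-@268 assembled a few steps earlier. Stripped of the xtrace prefixes, the plumbing is (a reconstruction from the log; the cvl_0_0/cvl_0_1 names come from the e810 ports found above):
    ip netns add cvl_0_0_ns_spdk                                     # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic on the initiator side
    ping -c 1 10.0.0.2                                               # reachability checked in both directions above
nvmfappstart then launches nvmf_tgt through ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix seen above), which is why the listener can bind 10.0.0.2.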
00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:53.492 16:55:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:53.492 "params": { 00:33:53.492 "name": "Nvme1", 00:33:53.492 "trtype": "tcp", 00:33:53.492 "traddr": "10.0.0.2", 00:33:53.492 "adrfam": "ipv4", 00:33:53.492 "trsvcid": "4420", 00:33:53.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.492 "hdgst": false, 00:33:53.492 "ddgst": false 00:33:53.492 }, 00:33:53.492 "method": "bdev_nvme_attach_controller" 00:33:53.492 }' 00:33:53.492 [2024-05-15 16:55:00.644033] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:53.492 [2024-05-15 16:55:00.644121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936411 ] 00:33:53.492 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.492 [2024-05-15 16:55:00.713902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.751 [2024-05-15 16:55:00.800686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.008 Running I/O for 1 seconds... 00:33:54.940 00:33:54.940 Latency(us) 00:33:54.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:54.940 Verification LBA range: start 0x0 length 0x4000 00:33:54.940 Nvme1n1 : 1.01 8643.82 33.76 0.00 0.00 14748.83 3094.76 13592.65 00:33:54.940 =================================================================================================================== 00:33:54.940 Total : 8643.82 33.76 0.00 0.00 14748.83 3094.76 13592.65 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1936672 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:55.197 { 00:33:55.197 "params": { 00:33:55.197 "name": "Nvme$subsystem", 00:33:55.197 "trtype": "$TEST_TRANSPORT", 00:33:55.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.197 "adrfam": "ipv4", 00:33:55.197 "trsvcid": "$NVMF_PORT", 00:33:55.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.197 "hdgst": ${hdgst:-false}, 00:33:55.197 "ddgst": ${ddgst:-false} 00:33:55.197 }, 00:33:55.197 "method": "bdev_nvme_attach_controller" 00:33:55.197 } 00:33:55.197 EOF 00:33:55.197 )") 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
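Annotation: both bdevperf runs (the 1-second baseline above and the 15-second run being configured here) attach Nvme1 using the JSON that gen_nvmf_target_json expands to. The target half they connect to was assembled at host/bdevperf.sh@17-@21 via rpc_cmd; as plain rpc.py calls this reduces to (a sketch reconstructed from the log, workspace prefix shortened; rpc_cmd is the harness wrapper around the same script):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # transport options as passed via NVMF_TRANSPORT_OPTS above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
The baseline invocation itself, with --json /dev/fd/62 written as the equivalent process substitution:
    # 1 second of 4 KiB verify I/O at queue depth 128 against Nvme1n1
    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
That is the run that reported 8643.82 IOPS in the latency table above; the second run swaps in -t 15 -f.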
00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:55.197 16:55:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:55.197 "params": { 00:33:55.197 "name": "Nvme1", 00:33:55.197 "trtype": "tcp", 00:33:55.197 "traddr": "10.0.0.2", 00:33:55.197 "adrfam": "ipv4", 00:33:55.197 "trsvcid": "4420", 00:33:55.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.197 "hdgst": false, 00:33:55.197 "ddgst": false 00:33:55.197 }, 00:33:55.197 "method": "bdev_nvme_attach_controller" 00:33:55.197 }' 00:33:55.197 [2024-05-15 16:55:02.301586] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:33:55.197 [2024-05-15 16:55:02.301675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936672 ] 00:33:55.197 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.197 [2024-05-15 16:55:02.369748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.454 [2024-05-15 16:55:02.453804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.711 Running I/O for 15 seconds... 00:33:58.236 16:55:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1936385 00:33:58.236 16:55:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:58.236 [2024-05-15 16:55:05.274085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.236 [2024-05-15 16:55:05.274137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.236 [2024-05-15 16:55:05.274171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.236 [2024-05-15 16:55:05.274190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.236 [2024-05-15 16:55:05.274210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.236 [2024-05-15 16:55:05.274236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.236 [2024-05-15 16:55:05.274273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.236 [2024-05-15 16:55:05.274290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.236 [2024-05-15 16:55:05.274307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.236 [2024-05-15 16:55:05.274324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.236 [2024-05-15 16:55:05.274341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.236 [2024-05-15 16:55:05.274366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.236 [... output condensed: the same nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion pair repeats for every remaining in-flight command on qid:1 (the remaining READs up through lba:37824 and WRITEs lba:38024-38600, len:8 each), and every one of them completes with ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-05-15 16:55:05.277558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.239 [2024-05-15 16:55:05.277772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.277805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.277838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.277874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.277908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.277940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.277973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.277990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.239 [2024-05-15 16:55:05.278768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x994010 is same 
with the state(5) to be set 00:33:58.239 [2024-05-15 16:55:05.278804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.239 [2024-05-15 16:55:05.278817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.239 [2024-05-15 16:55:05.278829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38016 len:8 PRP1 0x0 PRP2 0x0 00:33:58.239 [2024-05-15 16:55:05.278844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.278908] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x994010 was disconnected and freed. reset controller. 00:33:58.239 [2024-05-15 16:55:05.278982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.239 [2024-05-15 16:55:05.279006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.239 [2024-05-15 16:55:05.279023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.239 [2024-05-15 16:55:05.279038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.240 [2024-05-15 16:55:05.279059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.240 [2024-05-15 16:55:05.279076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.240 [2024-05-15 16:55:05.279091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.240 [2024-05-15 16:55:05.279107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.240 [2024-05-15 16:55:05.279122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.282787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.282827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.283477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.283626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.283652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.283669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.283914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.284161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.284186] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.284204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.287901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.240 [2024-05-15 16:55:05.296959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.297374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.297506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.297532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.297549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.297788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.298034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.298058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.298073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.301735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.240 [2024-05-15 16:55:05.310985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.311382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.311528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.311557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.311580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.311824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.312070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.312094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.312110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.315777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
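Every completion dumped above carries the status pair (00/08), which spdk_nvme_print_completion renders as ABORTED - SQ DELETION: status code type 0x0 (generic command status) with status code 0x08, meaning each queued WRITE/READ was aborted because its submission queue was deleted when the controller reset began. A minimal decoding sketch, assuming only the codes that occur in this log (the table and function below are illustrative, not SPDK code):

GENERIC_STATUS = {  # SCT 0x0: generic command status codes (NVMe base spec)
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    """Render the (SCT/SC) pair the way the log prints it."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION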
00:33:58.240 [2024-05-15 16:55:05.324991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.325414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.325564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.325592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.325609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.325866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.326112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.326136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.326152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.329770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.240 [2024-05-15 16:55:05.338928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.339344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.340470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.340506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.340525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.340769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.341016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.341040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.341057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.344674] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.240 [2024-05-15 16:55:05.352991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.353422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.353595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.353621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.353652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.353892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.354149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.354174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.354190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.357751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.240 [2024-05-15 16:55:05.366978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.367372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.367525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.367554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.367572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.367813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.368059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.368084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.368099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.371762] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.240 [2024-05-15 16:55:05.381020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.381408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.381550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.381578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.381596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.381861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.382097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.382117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.240 [2024-05-15 16:55:05.382131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.240 [2024-05-15 16:55:05.385677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.240 [2024-05-15 16:55:05.394465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.240 [2024-05-15 16:55:05.394876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.395072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.240 [2024-05-15 16:55:05.395098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.240 [2024-05-15 16:55:05.395115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.240 [2024-05-15 16:55:05.395387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.240 [2024-05-15 16:55:05.395613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.240 [2024-05-15 16:55:05.395633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.241 [2024-05-15 16:55:05.395646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.241 [2024-05-15 16:55:05.398719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.241 [2024-05-15 16:55:05.408346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.241 [2024-05-15 16:55:05.408760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.408987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.409029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.241 [2024-05-15 16:55:05.409048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.241 [2024-05-15 16:55:05.409305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.241 [2024-05-15 16:55:05.409529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.241 [2024-05-15 16:55:05.409549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.241 [2024-05-15 16:55:05.409579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.241 [2024-05-15 16:55:05.413166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.241 [2024-05-15 16:55:05.422316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.241 [2024-05-15 16:55:05.422803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.422960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.423001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.241 [2024-05-15 16:55:05.423020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.241 [2024-05-15 16:55:05.423270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.241 [2024-05-15 16:55:05.423516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.241 [2024-05-15 16:55:05.423540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.241 [2024-05-15 16:55:05.423556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.241 [2024-05-15 16:55:05.427224] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.241 [2024-05-15 16:55:05.436424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.241 [2024-05-15 16:55:05.436865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.437014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.437056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.241 [2024-05-15 16:55:05.437075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.241 [2024-05-15 16:55:05.437337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.241 [2024-05-15 16:55:05.437588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.241 [2024-05-15 16:55:05.437618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.241 [2024-05-15 16:55:05.437635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.241 [2024-05-15 16:55:05.441312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.241 [2024-05-15 16:55:05.450401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.241 [2024-05-15 16:55:05.450803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.450964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.241 [2024-05-15 16:55:05.450991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.241 [2024-05-15 16:55:05.451009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.241 [2024-05-15 16:55:05.451264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.241 [2024-05-15 16:55:05.451509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.241 [2024-05-15 16:55:05.451533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.241 [2024-05-15 16:55:05.451548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.241 [2024-05-15 16:55:05.455008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
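Each failed reset above has the same shape: connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because nothing is accepting on the target side, flushing the qpair then fails with a bad file descriptor, and the controller is left in the failed state until the next attempt; the timestamps show a new attempt roughly every 14 ms. A minimal sketch of that retry pattern, assuming plain Python sockets rather than SPDK's TCP transport (the function name and parameters are illustrative):

import errno
import socket
import time

def try_connect(addr: str, port: int, attempts: int, delay_s: float) -> bool:
    """Retry a TCP connect while the peer refuses, as in the cycles above."""
    for attempt in range(1, attempts + 1):
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return True  # listener is back; a real client would proceed
        except ConnectionRefusedError as e:
            print(f"attempt {attempt}: connect() failed, errno = {e.errno}")
            time.sleep(delay_s)
    return False

# try_connect("10.0.0.2", 4420, attempts=5, delay_s=0.014)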
00:33:58.539 [2024-05-15 16:55:05.463915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.539 [2024-05-15 16:55:05.464291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.464425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.464452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.539 [2024-05-15 16:55:05.464468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.539 [2024-05-15 16:55:05.464691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.539 [2024-05-15 16:55:05.464900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.539 [2024-05-15 16:55:05.464920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.539 [2024-05-15 16:55:05.464934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.539 [2024-05-15 16:55:05.468013] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.539 [2024-05-15 16:55:05.478123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.539 [2024-05-15 16:55:05.478566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.478765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.478792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.539 [2024-05-15 16:55:05.478809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.539 [2024-05-15 16:55:05.479069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.539 [2024-05-15 16:55:05.479339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.539 [2024-05-15 16:55:05.479366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.539 [2024-05-15 16:55:05.479388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.539 [2024-05-15 16:55:05.483109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.539 [2024-05-15 16:55:05.492271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.539 [2024-05-15 16:55:05.492680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.492869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.492898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.539 [2024-05-15 16:55:05.492916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.539 [2024-05-15 16:55:05.493157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.539 [2024-05-15 16:55:05.493423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.539 [2024-05-15 16:55:05.493447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.539 [2024-05-15 16:55:05.493463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.539 [2024-05-15 16:55:05.497161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.539 [2024-05-15 16:55:05.506320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.539 [2024-05-15 16:55:05.506757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.506916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.506943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.539 [2024-05-15 16:55:05.506960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.539 [2024-05-15 16:55:05.507239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.539 [2024-05-15 16:55:05.507507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.539 [2024-05-15 16:55:05.507532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.539 [2024-05-15 16:55:05.507548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.539 [2024-05-15 16:55:05.511211] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.539 [2024-05-15 16:55:05.520377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.539 [2024-05-15 16:55:05.520798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.520954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.539 [2024-05-15 16:55:05.520983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.539 [2024-05-15 16:55:05.521001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.539 [2024-05-15 16:55:05.521253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.539 [2024-05-15 16:55:05.521499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.521524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.521539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.525145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.540 [2024-05-15 16:55:05.534319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.534825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.534985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.535015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.535034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.535287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.535533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.535558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.535574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.539180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
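The recurring notice that the recv state of tqpair=0x999d30 "is same with the state(5) to be set" marks an idempotent assignment: the TCP qpair's receive state is being set to the value it already holds, and nvme_tcp_qpair_set_recv_state reports that instead of performing a transition. A minimal sketch of that guard, assuming a plain dict in place of the real qpair structure:

def set_recv_state(tqpair: dict, new_state: int) -> None:
    # Setting the receive state to its current value is only reported.
    if tqpair["recv_state"] == new_state:
        print(f"The recv state of tqpair={tqpair['addr']:#x} is same "
              f"with the state({new_state}) to be set")
        return
    tqpair["recv_state"] = new_state

tqpair = {"addr": 0x999D30, "recv_state": 5}
set_recv_state(tqpair, 5)  # prints the same notice as the lines above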
00:33:58.540 [2024-05-15 16:55:05.548341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.548833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.548990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.549019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.549037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.549287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.549533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.549557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.549573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.553182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.540 [2024-05-15 16:55:05.562346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.562762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.562932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.562958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.562975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.563227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.563474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.563498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.563513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.567116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.540 [2024-05-15 16:55:05.576277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.576715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.576938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.576996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.577014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.577266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.577513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.577537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.577553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.581159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.540 [2024-05-15 16:55:05.590320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.590732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.590930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.590956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.590972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.591236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.591483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.591507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.591524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.595132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.540 [2024-05-15 16:55:05.604294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.604725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.604925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.604969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.604987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.605239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.605485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.605510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.605525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.609131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.540 [2024-05-15 16:55:05.618295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.618710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.618901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.618928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.618944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.619197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.619453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.619478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.619494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.623098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.540 [2024-05-15 16:55:05.632251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.632686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.632844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.632870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.632886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.633131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.633388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.633413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.633429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.637034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.540 [2024-05-15 16:55:05.646189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.540 [2024-05-15 16:55:05.646615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.646841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.540 [2024-05-15 16:55:05.646868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:58.540 [2024-05-15 16:55:05.646900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:58.540 [2024-05-15 16:55:05.647147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:58.540 [2024-05-15 16:55:05.647402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.540 [2024-05-15 16:55:05.647426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.540 [2024-05-15 16:55:05.647442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.540 [2024-05-15 16:55:05.651045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.540 [2024-05-15 16:55:05.660206] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.660648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.660792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.660818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.660854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.661097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.661354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.661379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.661395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.541 [2024-05-15 16:55:05.665000] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.541 [2024-05-15 16:55:05.674147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.674559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.674706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.674735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.674753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.674994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.675250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.675274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.675290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.541 [2024-05-15 16:55:05.678895] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.541 [2024-05-15 16:55:05.688047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.688471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.688636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.688662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.688678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.688950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.689196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.689228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.689246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.541 [2024-05-15 16:55:05.692853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.541 [2024-05-15 16:55:05.702007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.702569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.702885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.702937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.702955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.703202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.703457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.703481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.703497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.541 [2024-05-15 16:55:05.707099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.541 [2024-05-15 16:55:05.716055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.716558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.716705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.716748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.716766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.717008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.717262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.717287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.717303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.541 [2024-05-15 16:55:05.720910] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.541 [2024-05-15 16:55:05.730180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.730606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.730814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.730864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.730884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.731142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.731416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.731443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.731459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.541 [2024-05-15 16:55:05.735121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.541 [2024-05-15 16:55:05.744069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.541 [2024-05-15 16:55:05.744511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.744718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.541 [2024-05-15 16:55:05.744744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.541 [2024-05-15 16:55:05.744760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.541 [2024-05-15 16:55:05.745028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.541 [2024-05-15 16:55:05.745289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.541 [2024-05-15 16:55:05.745314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.541 [2024-05-15 16:55:05.745330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.800 [2024-05-15 16:55:05.748972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.800 [2024-05-15 16:55:05.758129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.800 [2024-05-15 16:55:05.758563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.800 [2024-05-15 16:55:05.758732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.800 [2024-05-15 16:55:05.758761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.800 [2024-05-15 16:55:05.758778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.800 [2024-05-15 16:55:05.759020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.800 [2024-05-15 16:55:05.759278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.800 [2024-05-15 16:55:05.759303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.800 [2024-05-15 16:55:05.759319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.800 [2024-05-15 16:55:05.762925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.800 [2024-05-15 16:55:05.772104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.800 [2024-05-15 16:55:05.772516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.800 [2024-05-15 16:55:05.772699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.800 [2024-05-15 16:55:05.772728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.800 [2024-05-15 16:55:05.772746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.800 [2024-05-15 16:55:05.772987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.800 [2024-05-15 16:55:05.773243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.800 [2024-05-15 16:55:05.773268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.800 [2024-05-15 16:55:05.773284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.800 [2024-05-15 16:55:05.776890] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.800 [2024-05-15 16:55:05.786051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.800 [2024-05-15 16:55:05.786518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.800 [2024-05-15 16:55:05.786790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.786820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.786838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.787079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.787335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.787365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.787382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.790988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.799957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.800381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.800638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.800667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.800685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.800926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.801172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.801195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.801211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.804831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.813999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.814426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.814742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.814792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.814810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.815052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.815309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.815334] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.815350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.818954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.827898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.828290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.828511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.828541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.828559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.828801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.829047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.829071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.829092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.832712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.841867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.842295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.842426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.842455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.842473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.842714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.842960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.842984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.842999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.846618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.855775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.856174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.856339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.856368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.856385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.856626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.856872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.856896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.856912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.860525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.869680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.870073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.870231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.870262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.870280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.870521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.870767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.870791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.870807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.874427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.883583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.884017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.884198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.884236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.884256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.884497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.884743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.884768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.884783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.888397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.897558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.897959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.898117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.898146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.898164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.898415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.801 [2024-05-15 16:55:05.898661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.801 [2024-05-15 16:55:05.898685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.801 [2024-05-15 16:55:05.898701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.801 [2024-05-15 16:55:05.902317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.801 [2024-05-15 16:55:05.911479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.801 [2024-05-15 16:55:05.912007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.912231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.801 [2024-05-15 16:55:05.912260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.801 [2024-05-15 16:55:05.912278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.801 [2024-05-15 16:55:05.912519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.912765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.912789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.912805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.916419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:05.925373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:05.925796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.925945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.925973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:05.925991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:05.926243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.926489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.926514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.926530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.930136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:05.939293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:05.939711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.939961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.940015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:05.940033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:05.940284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.940531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.940556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.940572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.944178] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:05.953256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:05.953678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.953802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.953832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:05.953850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:05.954091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.954345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.954371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.954386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.957992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:05.967152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:05.967579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.967771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.967800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:05.967818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:05.968060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.968316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.968342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.968358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.971963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:05.981127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:05.981552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.981792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.981821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:05.981840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:05.982081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.982337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.982361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.982377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.985984] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:05.995173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:05.995597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.995759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:05.995788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:05.995806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:05.996047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:05.996303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:05.996327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:05.996343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:05.999951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:06.009107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:06.009544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:06.009724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:06.009758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:06.009777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:06.010018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:06.010281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:06.010307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:06.010323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.802 [2024-05-15 16:55:06.013931] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.802 [2024-05-15 16:55:06.023120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.802 [2024-05-15 16:55:06.023562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:06.023718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.802 [2024-05-15 16:55:06.023748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:58.802 [2024-05-15 16:55:06.023766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:58.802 [2024-05-15 16:55:06.024008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:58.802 [2024-05-15 16:55:06.024265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.802 [2024-05-15 16:55:06.024290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.802 [2024-05-15 16:55:06.024306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.061 [2024-05-15 16:55:06.027957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.061 [2024-05-15 16:55:06.037142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.061 [2024-05-15 16:55:06.037571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.061 [2024-05-15 16:55:06.037730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.061 [2024-05-15 16:55:06.037759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.037777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.038019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.038276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.038302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.038317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.041923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.051094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.051521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.051655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.051683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.051706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.051949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.052196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.052230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.052260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.055875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.065056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.065486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.065726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.065778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.065796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.066038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.066294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.066320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.066335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.069949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.079117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.079550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.079859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.079910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.079928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.080169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.080427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.080454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.080470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.084079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.093035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.093474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.093613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.093644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.093663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.093910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.094157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.094183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.094199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.097822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.106985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.107411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.107602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.107631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.107649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.107891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.108138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.108163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.108179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.111810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.120974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.121399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.121552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.121582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.121600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.121842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.122088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.122113] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.122130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.125752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.134915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.135330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.135487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.135516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.135533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.135774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.136025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.136050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.136065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.139688] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.062 [2024-05-15 16:55:06.148937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.062 [2024-05-15 16:55:06.149376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.149507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.062 [2024-05-15 16:55:06.149535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.062 [2024-05-15 16:55:06.149552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.062 [2024-05-15 16:55:06.149794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.062 [2024-05-15 16:55:06.150039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.062 [2024-05-15 16:55:06.150063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.062 [2024-05-15 16:55:06.150079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.062 [2024-05-15 16:55:06.153703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.162868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.163286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.163447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.163475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.163493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.163734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.163981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.164006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.164022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.167644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.176810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.177237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.177423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.177452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.177470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.177712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.177959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.177989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.178006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.181631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.190789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.191182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.191403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.191433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.191451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.191694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.191940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.191965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.191981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.195604] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.204778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.205169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.205372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.205401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.205420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.205661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.205908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.205933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.205949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.209572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.218749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.219166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.219340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.219369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.219386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.219628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.219874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.219900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.219920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.223541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.232714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.233134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.233298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.233327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.233345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.233586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.233831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.233856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.233872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.237492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.246669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.247087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.247271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.247300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.247317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.247559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.247806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.247831] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.247847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.251467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.260636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.261056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.261191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.261231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.063 [2024-05-15 16:55:06.261251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.063 [2024-05-15 16:55:06.261494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.063 [2024-05-15 16:55:06.261738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.063 [2024-05-15 16:55:06.261763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.063 [2024-05-15 16:55:06.261778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.063 [2024-05-15 16:55:06.265396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.063 [2024-05-15 16:55:06.274569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.063 [2024-05-15 16:55:06.274989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.275148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.063 [2024-05-15 16:55:06.275176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.064 [2024-05-15 16:55:06.275194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.064 [2024-05-15 16:55:06.275444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.064 [2024-05-15 16:55:06.275689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.064 [2024-05-15 16:55:06.275715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.064 [2024-05-15 16:55:06.275731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.064 [2024-05-15 16:55:06.279349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.323 [2024-05-15 16:55:06.288599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.323 [2024-05-15 16:55:06.289039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.323 [2024-05-15 16:55:06.289231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.323 [2024-05-15 16:55:06.289272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.323 [2024-05-15 16:55:06.289290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.323 [2024-05-15 16:55:06.289532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.323 [2024-05-15 16:55:06.289778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.323 [2024-05-15 16:55:06.289801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.323 [2024-05-15 16:55:06.289816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.323 [2024-05-15 16:55:06.293451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.323 [2024-05-15 16:55:06.302640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.323 [2024-05-15 16:55:06.303060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.323 [2024-05-15 16:55:06.303242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.323 [2024-05-15 16:55:06.303273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.323 [2024-05-15 16:55:06.303291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.323 [2024-05-15 16:55:06.303533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.323 [2024-05-15 16:55:06.303780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.323 [2024-05-15 16:55:06.303806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.323 [2024-05-15 16:55:06.303822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.323 [2024-05-15 16:55:06.307675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.323 [2024-05-15 16:55:06.316645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.323 [2024-05-15 16:55:06.317066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.323 [2024-05-15 16:55:06.317280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.323 [2024-05-15 16:55:06.317310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.323 [2024-05-15 16:55:06.317329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.323 [2024-05-15 16:55:06.317571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.323 [2024-05-15 16:55:06.317817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.323 [2024-05-15 16:55:06.317842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.323 [2024-05-15 16:55:06.317859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.323 [2024-05-15 16:55:06.321480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.323 [2024-05-15 16:55:06.330647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.331186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.331478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.331529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.331547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.331790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.332037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.332062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.332078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.335700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.344653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.345072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.345343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.345373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.345391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.345633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.345880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.345905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.345921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.349540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.358712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.359119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.359311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.359342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.359361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.359604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.359850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.359875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.359891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.363509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.372675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.373113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.373276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.373306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.373324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.373566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.373812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.373837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.373853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.377473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.386638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.387038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.387198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.387234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.387263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.387503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.387750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.387775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.387790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.391414] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.400598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.400993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.401151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.401181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.401204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.401457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.401703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.401728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.401743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.405364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.414549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.414953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.415135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.415165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.415183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.415433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.415680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.415705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.415721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.419340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.428532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.429063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.429244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.429274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.429292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.429534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.429780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.429805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.429820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.433455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.442424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.442955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.443147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.443176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.443194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.443450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.443697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.324 [2024-05-15 16:55:06.443721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.324 [2024-05-15 16:55:06.443737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.324 [2024-05-15 16:55:06.447363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.324 [2024-05-15 16:55:06.456332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.324 [2024-05-15 16:55:06.456838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.457052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.324 [2024-05-15 16:55:06.457081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.324 [2024-05-15 16:55:06.457099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.324 [2024-05-15 16:55:06.457352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.324 [2024-05-15 16:55:06.457599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.457623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.457639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.461266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.325 [2024-05-15 16:55:06.470250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.325 [2024-05-15 16:55:06.470666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.470874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.470903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.325 [2024-05-15 16:55:06.470922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.325 [2024-05-15 16:55:06.471163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.325 [2024-05-15 16:55:06.471422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.471447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.471463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.475080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.325 [2024-05-15 16:55:06.484272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.325 [2024-05-15 16:55:06.484767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.485020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.485049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.325 [2024-05-15 16:55:06.485067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.325 [2024-05-15 16:55:06.485318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.325 [2024-05-15 16:55:06.485571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.485595] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.485611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.489237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.325 [2024-05-15 16:55:06.498212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.325 [2024-05-15 16:55:06.498738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.498995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.499027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.325 [2024-05-15 16:55:06.499046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.325 [2024-05-15 16:55:06.499300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.325 [2024-05-15 16:55:06.499547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.499571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.499587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.503199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.325 [2024-05-15 16:55:06.512164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.325 [2024-05-15 16:55:06.512601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.512757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.512786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.325 [2024-05-15 16:55:06.512804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.325 [2024-05-15 16:55:06.513045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.325 [2024-05-15 16:55:06.513302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.513327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.513343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.516957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.325 [2024-05-15 16:55:06.526137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.325 [2024-05-15 16:55:06.526570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.526715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.526744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.325 [2024-05-15 16:55:06.526762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.325 [2024-05-15 16:55:06.527003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.325 [2024-05-15 16:55:06.527261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.527292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.527309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.530937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.325 [2024-05-15 16:55:06.540119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.325 [2024-05-15 16:55:06.540562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.540718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.325 [2024-05-15 16:55:06.540748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.325 [2024-05-15 16:55:06.540766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.325 [2024-05-15 16:55:06.541007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.325 [2024-05-15 16:55:06.541262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.325 [2024-05-15 16:55:06.541287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.325 [2024-05-15 16:55:06.541303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.325 [2024-05-15 16:55:06.544909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.554145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.554591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.554757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.554785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.554803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.555045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.555311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.555337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.555353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.558968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.568147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.568536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.568763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.568821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.568839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.569081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.569344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.569369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.569391] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.573005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.582193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.582606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.582738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.582767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.582785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.583026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.583298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.583323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.583339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.586954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.596159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.596651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.596857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.596886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.596904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.597146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.597403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.597428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.597444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.601058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.610242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.610663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.610821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.610850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.610867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.611109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.611365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.611390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.611407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.615022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.624183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.624608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.624762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.624790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.624807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.625048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.625304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.625334] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.625350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.628956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.638121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.638548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.638743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.638771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.638788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.639030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.639287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.639312] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.639328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.642962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.652137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.652568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.652728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.652756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.652773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.653014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.653271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.653296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.653312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.656921] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.666112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.666550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.666710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.666739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.585 [2024-05-15 16:55:06.666757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.585 [2024-05-15 16:55:06.666998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.585 [2024-05-15 16:55:06.667257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.585 [2024-05-15 16:55:06.667283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.585 [2024-05-15 16:55:06.667300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.585 [2024-05-15 16:55:06.670912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.585 [2024-05-15 16:55:06.680082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.585 [2024-05-15 16:55:06.680512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.585 [2024-05-15 16:55:06.680672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.680700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.680717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.680958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.681203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.681241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.681265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.684877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.694049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.586 [2024-05-15 16:55:06.694485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.694623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.694650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.694668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.694910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.695155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.695181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.695197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.698818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.707988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.586 [2024-05-15 16:55:06.708426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.708637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.708665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.708683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.708924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.709171] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.709196] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.709212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.712846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.722027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.586 [2024-05-15 16:55:06.722455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.722620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.722648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.722666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.722907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.723154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.723179] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.723196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.726818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.735987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.586 [2024-05-15 16:55:06.736420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.736607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.736635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.736653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.736895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.737142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.737167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.737183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.740807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.749970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.586 [2024-05-15 16:55:06.750403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.750534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.750567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.750586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.750827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.751072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.751097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.751113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.754735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.763897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.586 [2024-05-15 16:55:06.764310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.764491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.586 [2024-05-15 16:55:06.764556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:33:59.586 [2024-05-15 16:55:06.764575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:33:59.586 [2024-05-15 16:55:06.764817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:33:59.586 [2024-05-15 16:55:06.765063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.586 [2024-05-15 16:55:06.765089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.586 [2024-05-15 16:55:06.765105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.586 [2024-05-15 16:55:06.768726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.586 [2024-05-15 16:55:06.777893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.586 [2024-05-15 16:55:06.778287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.586 [2024-05-15 16:55:06.778446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.586 [2024-05-15 16:55:06.778474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.586 [2024-05-15 16:55:06.778491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.586 [2024-05-15 16:55:06.778733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.586 [2024-05-15 16:55:06.778978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.586 [2024-05-15 16:55:06.779004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.586 [2024-05-15 16:55:06.779020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.586 [2024-05-15 16:55:06.782642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.586 [2024-05-15 16:55:06.791817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.586 [2024-05-15 16:55:06.792227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.586 [2024-05-15 16:55:06.792410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.586 [2024-05-15 16:55:06.792439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.586 [2024-05-15 16:55:06.792462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.586 [2024-05-15 16:55:06.792705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.586 [2024-05-15 16:55:06.792950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.586 [2024-05-15 16:55:06.792975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.586 [2024-05-15 16:55:06.792991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.586 [2024-05-15 16:55:06.796615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.586 [2024-05-15 16:55:06.805786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.586 [2024-05-15 16:55:06.806184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.586 [2024-05-15 16:55:06.806326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.586 [2024-05-15 16:55:06.806356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.586 [2024-05-15 16:55:06.806375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.586 [2024-05-15 16:55:06.806617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.586 [2024-05-15 16:55:06.806870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.586 [2024-05-15 16:55:06.806896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.587 [2024-05-15 16:55:06.806912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.587 [2024-05-15 16:55:06.810572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.845 [2024-05-15 16:55:06.819790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.820206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.820375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.820405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.820423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.820665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.820912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.820937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.820953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.824576] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.845 [2024-05-15 16:55:06.833740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.834166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.834332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.834362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.834380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.834628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.834875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.834900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.834917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.838535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.845 [2024-05-15 16:55:06.847702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.848097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.848283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.848312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.848330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.848572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.848818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.848843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.848858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.852479] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.845 [2024-05-15 16:55:06.861649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.862067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.862259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.862288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.862306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.862547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.862792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.862817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.862833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.866453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.845 [2024-05-15 16:55:06.875621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.876028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.876177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.876204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.876234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.876478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.876728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.876754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.876771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.880390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.845 [2024-05-15 16:55:06.889551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.889948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.890121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.890149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.890167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.890421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.890667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.890693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.890708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.894325] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.845 [2024-05-15 16:55:06.903489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.903892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.904082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.904110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.904128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.904382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.904629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.904655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.904671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.908293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.845 [2024-05-15 16:55:06.917462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.917886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.918043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.918071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.918089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.918343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.918588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.918618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.918635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.922255] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.845 [2024-05-15 16:55:06.931434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.931851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.932035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.932063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.932081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.932337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.932585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.932610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.932626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.936253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.845 [2024-05-15 16:55:06.945424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.945841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.946001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.946030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.946048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.946303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.946560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.946585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.946602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.950213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.845 [2024-05-15 16:55:06.959384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.959804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.959961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.959990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.960007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.960262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.960509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.845 [2024-05-15 16:55:06.960535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.845 [2024-05-15 16:55:06.960556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.845 [2024-05-15 16:55:06.964168] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.845 [2024-05-15 16:55:06.973349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.845 [2024-05-15 16:55:06.973746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.973897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.845 [2024-05-15 16:55:06.973925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.845 [2024-05-15 16:55:06.973942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.845 [2024-05-15 16:55:06.974184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.845 [2024-05-15 16:55:06.974441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:06.974467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:06.974483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:06.978098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.846 [2024-05-15 16:55:06.987299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.846 [2024-05-15 16:55:06.987727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:06.987912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:06.987940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.846 [2024-05-15 16:55:06.987958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.846 [2024-05-15 16:55:06.988199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.846 [2024-05-15 16:55:06.988460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:06.988485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:06.988501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:06.992110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.846 [2024-05-15 16:55:07.001301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.846 [2024-05-15 16:55:07.001726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.001885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.001917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.846 [2024-05-15 16:55:07.001936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.846 [2024-05-15 16:55:07.002178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.846 [2024-05-15 16:55:07.002435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:07.002460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:07.002477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:07.006094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.846 [2024-05-15 16:55:07.015290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.846 [2024-05-15 16:55:07.015845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.016054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.016083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.846 [2024-05-15 16:55:07.016101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.846 [2024-05-15 16:55:07.016353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.846 [2024-05-15 16:55:07.016600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:07.016625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:07.016641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:07.020263] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.846 [2024-05-15 16:55:07.029228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.846 [2024-05-15 16:55:07.029655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.029812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.029841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.846 [2024-05-15 16:55:07.029859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.846 [2024-05-15 16:55:07.030099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.846 [2024-05-15 16:55:07.030357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:07.030382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:07.030397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:07.034007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.846 [2024-05-15 16:55:07.043186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.846 [2024-05-15 16:55:07.043719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.044016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.044045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.846 [2024-05-15 16:55:07.044064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.846 [2024-05-15 16:55:07.044317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.846 [2024-05-15 16:55:07.044565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:07.044590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:07.044606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:07.048229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.846 [2024-05-15 16:55:07.057179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.846 [2024-05-15 16:55:07.057655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.057812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.846 [2024-05-15 16:55:07.057841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:33:59.846 [2024-05-15 16:55:07.057860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:33:59.846 [2024-05-15 16:55:07.058101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:33:59.846 [2024-05-15 16:55:07.058357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.846 [2024-05-15 16:55:07.058382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.846 [2024-05-15 16:55:07.058398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.846 [2024-05-15 16:55:07.062012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.103 [2024-05-15 16:55:07.071230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.103 [2024-05-15 16:55:07.071642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.103 [2024-05-15 16:55:07.071827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.103 [2024-05-15 16:55:07.071859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.103 [2024-05-15 16:55:07.071877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.103 [2024-05-15 16:55:07.072119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.103 [2024-05-15 16:55:07.072376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.103 [2024-05-15 16:55:07.072401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.103 [2024-05-15 16:55:07.072416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.103 [2024-05-15 16:55:07.076047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.103 [2024-05-15 16:55:07.085241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.103 [2024-05-15 16:55:07.085667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.103 [2024-05-15 16:55:07.085820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.103 [2024-05-15 16:55:07.085848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.103 [2024-05-15 16:55:07.085867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.103 [2024-05-15 16:55:07.086108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.103 [2024-05-15 16:55:07.086366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.103 [2024-05-15 16:55:07.086391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.103 [2024-05-15 16:55:07.086407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.103 [2024-05-15 16:55:07.090016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.103 [2024-05-15 16:55:07.099189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.103 [2024-05-15 16:55:07.099611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.103 [2024-05-15 16:55:07.099773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.103 [2024-05-15 16:55:07.099802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.103 [2024-05-15 16:55:07.099819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.103 [2024-05-15 16:55:07.100060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.100318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.100343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.100359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.103974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.113151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.113555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.113740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.113769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.113787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.114028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.114288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.114313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.114329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.117939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.104 [2024-05-15 16:55:07.127112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.127635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.127902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.127931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.127950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.128191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.128448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.128473] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.128489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.132099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.141067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.141556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.141841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.141869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.141892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.142135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.142391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.142415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.142431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.146039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.104 [2024-05-15 16:55:07.154998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.155435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.155646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.155704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.155722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.155963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.156209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.156245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.156261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.159869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.169108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.169512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.169672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.169701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.169719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.169961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.170206] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.170241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.170259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.173869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.104 [2024-05-15 16:55:07.183037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.183476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.183693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.183723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.183741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.183988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.184246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.184270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.184286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.187898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.197067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.197475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.197740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.197794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.197813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.198055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.198314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.198339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.198355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.201962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.104 [2024-05-15 16:55:07.211122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.211555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.211737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.211766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.211784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.212026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.212282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.212306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.212323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.215928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.225079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.225507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.225690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.225719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.225737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.225979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.226240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.226265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.226281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.229889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.104 [2024-05-15 16:55:07.239051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.239458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.239612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.239640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.239658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.239899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.240146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.240170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.240186] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.243805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.252967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.253391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.253517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.253546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.253564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.253805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.254051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.254075] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.254091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.257707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.104 [2024-05-15 16:55:07.266864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.104 [2024-05-15 16:55:07.267282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.267440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.104 [2024-05-15 16:55:07.267469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.104 [2024-05-15 16:55:07.267487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.104 [2024-05-15 16:55:07.267728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.104 [2024-05-15 16:55:07.267974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.104 [2024-05-15 16:55:07.268007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.104 [2024-05-15 16:55:07.268024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.104 [2024-05-15 16:55:07.271640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.104 [2024-05-15 16:55:07.280798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.104 [2024-05-15 16:55:07.281222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.281362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.281391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.104 [2024-05-15 16:55:07.281409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.104 [2024-05-15 16:55:07.281650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.104 [2024-05-15 16:55:07.281896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.104 [2024-05-15 16:55:07.281920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.104 [2024-05-15 16:55:07.281936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.104 [2024-05-15 16:55:07.285548] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.104 [2024-05-15 16:55:07.294703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.104 [2024-05-15 16:55:07.295126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.295290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.295320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.104 [2024-05-15 16:55:07.295338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.104 [2024-05-15 16:55:07.295579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.104 [2024-05-15 16:55:07.295825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.104 [2024-05-15 16:55:07.295849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.104 [2024-05-15 16:55:07.295865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.104 [2024-05-15 16:55:07.299477] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.104 [2024-05-15 16:55:07.308636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.104 [2024-05-15 16:55:07.309031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.309213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.309250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.104 [2024-05-15 16:55:07.309268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.104 [2024-05-15 16:55:07.309510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.104 [2024-05-15 16:55:07.309755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.104 [2024-05-15 16:55:07.309780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.104 [2024-05-15 16:55:07.309801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.104 [2024-05-15 16:55:07.313428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.104 [2024-05-15 16:55:07.322585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.104 [2024-05-15 16:55:07.323012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.323180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.104 [2024-05-15 16:55:07.323209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.104 [2024-05-15 16:55:07.323237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.104 [2024-05-15 16:55:07.323479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.104 [2024-05-15 16:55:07.323725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.104 [2024-05-15 16:55:07.323749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.104 [2024-05-15 16:55:07.323765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.104 [2024-05-15 16:55:07.327398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.363 [2024-05-15 16:55:07.336771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.363 [2024-05-15 16:55:07.337297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.363 [2024-05-15 16:55:07.337515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.363 [2024-05-15 16:55:07.337572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.363 [2024-05-15 16:55:07.337590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.363 [2024-05-15 16:55:07.337832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.363 [2024-05-15 16:55:07.338078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.363 [2024-05-15 16:55:07.338102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.363 [2024-05-15 16:55:07.338119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.363 [2024-05-15 16:55:07.341735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.363 [2024-05-15 16:55:07.350683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.363 [2024-05-15 16:55:07.351102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.363 [2024-05-15 16:55:07.351241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.363 [2024-05-15 16:55:07.351271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.363 [2024-05-15 16:55:07.351289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.363 [2024-05-15 16:55:07.351530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.363 [2024-05-15 16:55:07.351776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.363 [2024-05-15 16:55:07.351801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.363 [2024-05-15 16:55:07.351817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.363 [2024-05-15 16:55:07.355440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.363 [2024-05-15 16:55:07.364604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.363 [2024-05-15 16:55:07.365033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.363 [2024-05-15 16:55:07.365194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.363 [2024-05-15 16:55:07.365234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.363 [2024-05-15 16:55:07.365254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.363 [2024-05-15 16:55:07.365496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.363 [2024-05-15 16:55:07.365742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.363 [2024-05-15 16:55:07.365766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.363 [2024-05-15 16:55:07.365782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.369398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.378559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.378973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.379156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.379185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.379203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.379453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.379699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.379724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.379740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.383356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.392508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.392931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.393111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.393140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.393158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.393410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.393656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.393680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.393696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.397312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.406473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.406888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.407021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.407050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.407068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.407320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.407565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.407590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.407605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.411222] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.420383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.420801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.420931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.420960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.420978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.421228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.421475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.421499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.421515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.425122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.434289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.434706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.434862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.434891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.434910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.435150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.435405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.435431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.435447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.439055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.448220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.448660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.448858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.448895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.448929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.449171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.449426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.449451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.449466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.453073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.462235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.462730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.462890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.462919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.462938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.463179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.364 [2024-05-15 16:55:07.463434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.364 [2024-05-15 16:55:07.463459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.364 [2024-05-15 16:55:07.463475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.364 [2024-05-15 16:55:07.467080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.364 [2024-05-15 16:55:07.476248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.364 [2024-05-15 16:55:07.476662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.476815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.364 [2024-05-15 16:55:07.476843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.364 [2024-05-15 16:55:07.476862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.364 [2024-05-15 16:55:07.477103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.477361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.477386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.477402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.481012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.490186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.490613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.490829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.490883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.490902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.491144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.491400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.491425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.491440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.495046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.504203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.504611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.504794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.504823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.504841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.505082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.505338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.505363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.505379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.508985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.518147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.518572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.518708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.518737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.518755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.518996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.519252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.519277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.519293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.522899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.532052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.532464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.532624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.532655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.532679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.532922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.533168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.533192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.533208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.536828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.545990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.546419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.546583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.546612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.546630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.546871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.547117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.547142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.547157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.550774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.559929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.560355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.560512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.560541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.560559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.560799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.561044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.561069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.561085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.564701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.573855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.574279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.574419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.574448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.574466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.574713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.574959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.574983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.574999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.365 [2024-05-15 16:55:07.578645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.365 [2024-05-15 16:55:07.587828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.365 [2024-05-15 16:55:07.588207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.588390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.365 [2024-05-15 16:55:07.588420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.365 [2024-05-15 16:55:07.588438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.365 [2024-05-15 16:55:07.588679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.365 [2024-05-15 16:55:07.588925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.365 [2024-05-15 16:55:07.588951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.365 [2024-05-15 16:55:07.588967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.624 [2024-05-15 16:55:07.592617] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.624 [2024-05-15 16:55:07.601805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.624 [2024-05-15 16:55:07.602230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.602414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.602443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.624 [2024-05-15 16:55:07.602461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.624 [2024-05-15 16:55:07.602703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.624 [2024-05-15 16:55:07.602949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.624 [2024-05-15 16:55:07.602973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.624 [2024-05-15 16:55:07.602989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.624 [2024-05-15 16:55:07.606606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.624 [2024-05-15 16:55:07.615778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.624 [2024-05-15 16:55:07.616202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.616343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.616372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.624 [2024-05-15 16:55:07.616390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.624 [2024-05-15 16:55:07.616632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.624 [2024-05-15 16:55:07.616883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.624 [2024-05-15 16:55:07.616908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.624 [2024-05-15 16:55:07.616924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.624 [2024-05-15 16:55:07.620540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.624 [2024-05-15 16:55:07.629702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.624 [2024-05-15 16:55:07.630119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.630277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.630307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.624 [2024-05-15 16:55:07.630325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.624 [2024-05-15 16:55:07.630567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.624 [2024-05-15 16:55:07.630813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.624 [2024-05-15 16:55:07.630838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.624 [2024-05-15 16:55:07.630853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.624 [2024-05-15 16:55:07.634468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.624 [2024-05-15 16:55:07.643627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.624 [2024-05-15 16:55:07.644080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.644240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.624 [2024-05-15 16:55:07.644270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.624 [2024-05-15 16:55:07.644288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.624 [2024-05-15 16:55:07.644530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.624 [2024-05-15 16:55:07.644775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.624 [2024-05-15 16:55:07.644799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.644815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.648427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.657586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.657982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.658161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.658190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.658208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.658458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.658704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.658734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.658750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.662366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.671526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.671957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.672081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.672110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.672128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.672379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.672626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.672650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.672666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.676283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.685444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.685888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.686073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.686102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.686120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.686373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.686619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.686643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.686659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.690273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.699428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.699858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.700021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.700052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.700070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.700320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.700566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.700590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.700611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.704246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.713417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.713868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.714049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.714077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.714095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.714348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.714597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.714621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.714637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.718253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.727426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.727884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.728033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.728062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.728079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.728331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.728589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.728613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.728629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.732243] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.741399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.741817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.742012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.742058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.742077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.742328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.742575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.742599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.742615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.746238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.755393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.755786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.755967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.755996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.756014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.756265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.756512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.756536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.756552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.760155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.769319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.769737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.769900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.769928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.625 [2024-05-15 16:55:07.769946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.625 [2024-05-15 16:55:07.770186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.625 [2024-05-15 16:55:07.770446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.625 [2024-05-15 16:55:07.770471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.625 [2024-05-15 16:55:07.770486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.625 [2024-05-15 16:55:07.774101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.625 [2024-05-15 16:55:07.783291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.625 [2024-05-15 16:55:07.783718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.625 [2024-05-15 16:55:07.783898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.783945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.626 [2024-05-15 16:55:07.783964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.626 [2024-05-15 16:55:07.784204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.626 [2024-05-15 16:55:07.784458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.626 [2024-05-15 16:55:07.784483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.626 [2024-05-15 16:55:07.784511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.626 [2024-05-15 16:55:07.788119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.626 [2024-05-15 16:55:07.797293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.626 [2024-05-15 16:55:07.797742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.797944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.797973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.626 [2024-05-15 16:55:07.797991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.626 [2024-05-15 16:55:07.798241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.626 [2024-05-15 16:55:07.798487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.626 [2024-05-15 16:55:07.798519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.626 [2024-05-15 16:55:07.798534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.626 [2024-05-15 16:55:07.802139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.626 [2024-05-15 16:55:07.811324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.626 [2024-05-15 16:55:07.811750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.811928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.811962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.626 [2024-05-15 16:55:07.811995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.626 [2024-05-15 16:55:07.812246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.626 [2024-05-15 16:55:07.812492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.626 [2024-05-15 16:55:07.812523] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.626 [2024-05-15 16:55:07.812539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.626 [2024-05-15 16:55:07.816145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.626 [2024-05-15 16:55:07.825318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.626 [2024-05-15 16:55:07.825710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.825931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.825964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.626 [2024-05-15 16:55:07.825998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.626 [2024-05-15 16:55:07.826251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.626 [2024-05-15 16:55:07.826498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.626 [2024-05-15 16:55:07.826522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.626 [2024-05-15 16:55:07.826538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.626 [2024-05-15 16:55:07.830147] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.626 [2024-05-15 16:55:07.839315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.626 [2024-05-15 16:55:07.839728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.839913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.626 [2024-05-15 16:55:07.839942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.626 [2024-05-15 16:55:07.839960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.626 [2024-05-15 16:55:07.840201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.626 [2024-05-15 16:55:07.840457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.626 [2024-05-15 16:55:07.840482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.626 [2024-05-15 16:55:07.840497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.626 [2024-05-15 16:55:07.844104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.885 [2024-05-15 16:55:07.853327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.885 [2024-05-15 16:55:07.853764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.853924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.853952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.885 [2024-05-15 16:55:07.853970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.885 [2024-05-15 16:55:07.854211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.885 [2024-05-15 16:55:07.854466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.885 [2024-05-15 16:55:07.854491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.885 [2024-05-15 16:55:07.854506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.885 [2024-05-15 16:55:07.858139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.885 [2024-05-15 16:55:07.867309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.885 [2024-05-15 16:55:07.867769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.867950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.867978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.885 [2024-05-15 16:55:07.867996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.885 [2024-05-15 16:55:07.868247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.885 [2024-05-15 16:55:07.868492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.885 [2024-05-15 16:55:07.868517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.885 [2024-05-15 16:55:07.868533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.885 [2024-05-15 16:55:07.872138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.885 [2024-05-15 16:55:07.881308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.885 [2024-05-15 16:55:07.881727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.881953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.882000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.885 [2024-05-15 16:55:07.882024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.885 [2024-05-15 16:55:07.882277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.885 [2024-05-15 16:55:07.882524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.885 [2024-05-15 16:55:07.882548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.885 [2024-05-15 16:55:07.882563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.885 [2024-05-15 16:55:07.886172] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.885 [2024-05-15 16:55:07.895333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.885 [2024-05-15 16:55:07.895786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.895917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.885 [2024-05-15 16:55:07.895946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.885 [2024-05-15 16:55:07.895964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.886 [2024-05-15 16:55:07.896205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.886 [2024-05-15 16:55:07.896462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.886 [2024-05-15 16:55:07.896486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.886 [2024-05-15 16:55:07.896502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.886 [2024-05-15 16:55:07.900109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.886 [2024-05-15 16:55:07.909278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.886 [2024-05-15 16:55:07.909722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.886 [2024-05-15 16:55:07.909922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.886 [2024-05-15 16:55:07.909956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.886 [2024-05-15 16:55:07.909991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.886 [2024-05-15 16:55:07.910247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.886 [2024-05-15 16:55:07.910493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.886 [2024-05-15 16:55:07.910518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.886 [2024-05-15 16:55:07.910534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.886 [2024-05-15 16:55:07.914140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.886 [2024-05-15 16:55:07.923301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.886 [2024-05-15 16:55:07.923715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.886 [2024-05-15 16:55:07.923876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.886 [2024-05-15 16:55:07.923905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.886 [2024-05-15 16:55:07.923923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.886 [2024-05-15 16:55:07.924174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.886 [2024-05-15 16:55:07.924431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.886 [2024-05-15 16:55:07.924456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.886 [2024-05-15 16:55:07.924472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.886 [2024-05-15 16:55:07.928078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.886 [2024-05-15 16:55:07.937261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.886 [2024-05-15 16:55:07.937801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.886 [2024-05-15 16:55:07.938010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.886 [2024-05-15 16:55:07.938039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420
00:34:00.886 [2024-05-15 16:55:07.938057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set
00:34:00.886 [2024-05-15 16:55:07.938310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor
00:34:00.886 [2024-05-15 16:55:07.938557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.886 [2024-05-15 16:55:07.938581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.886 [2024-05-15 16:55:07.938597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.886 [2024-05-15 16:55:07.942202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.886 [2024-05-15 16:55:07.951148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.886 [2024-05-15 16:55:07.951696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.951961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.951990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.886 [2024-05-15 16:55:07.952007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.886 [2024-05-15 16:55:07.952259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.886 [2024-05-15 16:55:07.952506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.886 [2024-05-15 16:55:07.952530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.886 [2024-05-15 16:55:07.952545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.886 [2024-05-15 16:55:07.956154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.886 [2024-05-15 16:55:07.965105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.886 [2024-05-15 16:55:07.965525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.965736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.965765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.886 [2024-05-15 16:55:07.965783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.886 [2024-05-15 16:55:07.966024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.886 [2024-05-15 16:55:07.966284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.886 [2024-05-15 16:55:07.966309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.886 [2024-05-15 16:55:07.966324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.886 [2024-05-15 16:55:07.969929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.886 [2024-05-15 16:55:07.979091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.886 [2024-05-15 16:55:07.979632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.979913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.979942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.886 [2024-05-15 16:55:07.979960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.886 [2024-05-15 16:55:07.980201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.886 [2024-05-15 16:55:07.980457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.886 [2024-05-15 16:55:07.980481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.886 [2024-05-15 16:55:07.980497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.886 [2024-05-15 16:55:07.984104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.886 [2024-05-15 16:55:07.993049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.886 [2024-05-15 16:55:07.993462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.993598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:07.993627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.886 [2024-05-15 16:55:07.993645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.886 [2024-05-15 16:55:07.993885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.886 [2024-05-15 16:55:07.994131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.886 [2024-05-15 16:55:07.994155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.886 [2024-05-15 16:55:07.994171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.886 [2024-05-15 16:55:07.997785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.886 [2024-05-15 16:55:08.006976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.886 [2024-05-15 16:55:08.007416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:08.007681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:08.007711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.886 [2024-05-15 16:55:08.007729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.886 [2024-05-15 16:55:08.007970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.886 [2024-05-15 16:55:08.008226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.886 [2024-05-15 16:55:08.008255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.886 [2024-05-15 16:55:08.008272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.886 [2024-05-15 16:55:08.011883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.886 [2024-05-15 16:55:08.021054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.886 [2024-05-15 16:55:08.021458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:08.021617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.886 [2024-05-15 16:55:08.021645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.886 [2024-05-15 16:55:08.021663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.886 [2024-05-15 16:55:08.021903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.886 [2024-05-15 16:55:08.022149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.886 [2024-05-15 16:55:08.022173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.886 [2024-05-15 16:55:08.022188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.886 [2024-05-15 16:55:08.025811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.886 [2024-05-15 16:55:08.034967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.887 [2024-05-15 16:55:08.035391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.035545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.035574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.887 [2024-05-15 16:55:08.035592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.887 [2024-05-15 16:55:08.035833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.887 [2024-05-15 16:55:08.036078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.887 [2024-05-15 16:55:08.036102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.887 [2024-05-15 16:55:08.036118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.887 [2024-05-15 16:55:08.039742] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.887 [2024-05-15 16:55:08.048904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.887 [2024-05-15 16:55:08.049327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.049514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.049563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.887 [2024-05-15 16:55:08.049582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.887 [2024-05-15 16:55:08.049824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.887 [2024-05-15 16:55:08.050070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.887 [2024-05-15 16:55:08.050094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.887 [2024-05-15 16:55:08.050115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.887 [2024-05-15 16:55:08.053729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.887 [2024-05-15 16:55:08.062881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.887 [2024-05-15 16:55:08.063287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.063458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.063487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.887 [2024-05-15 16:55:08.063508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.887 [2024-05-15 16:55:08.063749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.887 [2024-05-15 16:55:08.063994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.887 [2024-05-15 16:55:08.064018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.887 [2024-05-15 16:55:08.064034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.887 [2024-05-15 16:55:08.067652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.887 [2024-05-15 16:55:08.076807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.887 [2024-05-15 16:55:08.077308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.077569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.077623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.887 [2024-05-15 16:55:08.077641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.887 [2024-05-15 16:55:08.077882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.887 [2024-05-15 16:55:08.078128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.887 [2024-05-15 16:55:08.078158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.887 [2024-05-15 16:55:08.078174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.887 [2024-05-15 16:55:08.081793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.887 [2024-05-15 16:55:08.090732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.887 [2024-05-15 16:55:08.091150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.091324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.091354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.887 [2024-05-15 16:55:08.091373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.887 [2024-05-15 16:55:08.091614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.887 [2024-05-15 16:55:08.091860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.887 [2024-05-15 16:55:08.091884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.887 [2024-05-15 16:55:08.091900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.887 [2024-05-15 16:55:08.095521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.887 [2024-05-15 16:55:08.104680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.887 [2024-05-15 16:55:08.105101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.105290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.887 [2024-05-15 16:55:08.105320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:00.887 [2024-05-15 16:55:08.105338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:00.887 [2024-05-15 16:55:08.105580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:00.887 [2024-05-15 16:55:08.105826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.887 [2024-05-15 16:55:08.105850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.887 [2024-05-15 16:55:08.105866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.887 [2024-05-15 16:55:08.109512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.147 [2024-05-15 16:55:08.118753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.119174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.119315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.119344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.119362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.119605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.119851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.119875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.119891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.123519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.147 [2024-05-15 16:55:08.132742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.133179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.133354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.133383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.133402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.133642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.133888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.133913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.133929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.137545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.147 [2024-05-15 16:55:08.146706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.147140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.147307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.147339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.147357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.147599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.147845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.147869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.147885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.151506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.147 [2024-05-15 16:55:08.160667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.161074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.161260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.161290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.161308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.161550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.161796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.161820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.161835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.165450] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.147 [2024-05-15 16:55:08.174610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.175038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.175230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.175259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.175277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.175518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.175764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.175788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.175803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.179417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.147 [2024-05-15 16:55:08.188583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.189120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.189375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.189405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.189423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.189664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.189909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.189933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.189949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.193578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.147 [2024-05-15 16:55:08.202527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.202995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.203154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.203184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.203203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.203453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.203699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.203724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.203740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.147 [2024-05-15 16:55:08.207360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.147 [2024-05-15 16:55:08.216527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.147 [2024-05-15 16:55:08.217063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.217264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.147 [2024-05-15 16:55:08.217293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.147 [2024-05-15 16:55:08.217310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.147 [2024-05-15 16:55:08.217551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.147 [2024-05-15 16:55:08.217797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.147 [2024-05-15 16:55:08.217822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.147 [2024-05-15 16:55:08.217839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.221459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.148 [2024-05-15 16:55:08.230404] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.230822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.231077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.231135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.231154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.231409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.231656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.231681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.231697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.235313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.148 [2024-05-15 16:55:08.244478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.244896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.245053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.245083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.245101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.245354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.245601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.245626] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.245643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.249261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.148 [2024-05-15 16:55:08.258419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.258847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.259079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.259131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.259149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.259404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.259651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.259676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.259692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.263309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1936385 Killed "${NVMF_APP[@]}" "$@" 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1937340 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1937340 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1937340 ']' 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
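The bdevperf.sh line above is the pivot of this phase: the script killed the running nvmf_tgt (PID 1936385) on purpose, which is why every reconnect attempt above was refused, and tgt_init/nvmfappstart now launch a replacement target (nvmfpid=1937340) inside the cvl_0_0_ns_spdk namespace and wait for its RPC socket. A hypothetical reduction of that wait, assuming the /var/tmp/spdk.sock path from the records (the real waitforlisten helper in autotest_common.sh also verifies the process is still alive and retries with more bookkeeping):

  # Poll until the freshly started target exposes its JSON-RPC UNIX socket.
  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      if [ -S "$rpc_sock" ]; then
          echo "nvmf_tgt is up; $rpc_sock is listening"
          break
      fi
      sleep 0.1
  done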
00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:01.148 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.148 [2024-05-15 16:55:08.272473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.272868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.273038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.273066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.273083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.273334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.273580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.273603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.273618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.277235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.148 [2024-05-15 16:55:08.286396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.286815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.286970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.286998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.287015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.287267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.287526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.287552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.287567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.291201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.148 [2024-05-15 16:55:08.300421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.300819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.300977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.301010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.301029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.301280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.301526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.301550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.301566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.305177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.148 [2024-05-15 16:55:08.314358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.314778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.314911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.314939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.314957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.315198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.315452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.315476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.148 [2024-05-15 16:55:08.315492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.148 [2024-05-15 16:55:08.318717] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:34:01.148 [2024-05-15 16:55:08.318799] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.148 [2024-05-15 16:55:08.319098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
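The bracketed EAL parameter line shows how the nvmfappstart arguments were translated for DPDK: -m 0xE became the core bitmask -c 0xE, --base-virtaddr=0x200000000000 fixes where hugepage memory is mapped so secondary processes can attach at the same addresses, and --file-prefix=spdk0 keeps this instance's hugepage files from colliding with other SPDK processes on the host. Decoding the mask in the shell (the mask value is taken from the log; the 8-core bound is an arbitrary illustration):

  # 0xE = binary 1110: cores 1, 2 and 3 are enabled, core 0 is left free.
  mask=0xE
  for core in $(seq 0 7); do
      if (( (mask >> core) & 1 )); then
          echo "core $core enabled"
      fi
  done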
00:34:01.148 [2024-05-15 16:55:08.328274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.148 [2024-05-15 16:55:08.328687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.328840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.148 [2024-05-15 16:55:08.328868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.148 [2024-05-15 16:55:08.328886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.148 [2024-05-15 16:55:08.329126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.148 [2024-05-15 16:55:08.329381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.148 [2024-05-15 16:55:08.329405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.149 [2024-05-15 16:55:08.329421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.149 [2024-05-15 16:55:08.333031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.149 [2024-05-15 16:55:08.342192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.149 [2024-05-15 16:55:08.342626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.149 [2024-05-15 16:55:08.342807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.149 [2024-05-15 16:55:08.342835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.149 [2024-05-15 16:55:08.342853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.149 [2024-05-15 16:55:08.343094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.149 [2024-05-15 16:55:08.343347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.149 [2024-05-15 16:55:08.343371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.149 [2024-05-15 16:55:08.343386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.149 [2024-05-15 16:55:08.346994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.149 [2024-05-15 16:55:08.356171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.149 [2024-05-15 16:55:08.356575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.149 [2024-05-15 16:55:08.356729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.149 [2024-05-15 16:55:08.356757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.149 [2024-05-15 16:55:08.356774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.149 [2024-05-15 16:55:08.357014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.149 [2024-05-15 16:55:08.357272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.149 [2024-05-15 16:55:08.357296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.149 [2024-05-15 16:55:08.357311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.149 [2024-05-15 16:55:08.361106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.149 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.149 [2024-05-15 16:55:08.370084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.149 [2024-05-15 16:55:08.370517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.149 [2024-05-15 16:55:08.370680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.149 [2024-05-15 16:55:08.370711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.149 [2024-05-15 16:55:08.370728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.149 [2024-05-15 16:55:08.370976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.149 [2024-05-15 16:55:08.371240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.149 [2024-05-15 16:55:08.371265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.149 [2024-05-15 16:55:08.371281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.374924] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
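The EAL notice means no free 2048 kB hugepages were found on NUMA node 1; startup continues, presumably because the allocation was satisfied from node 0, but it can cost cross-node memory traffic if the NIC or the reactor cores sit on node 1. A hedged sysfs check of the per-node pools (standard Linux layout, nothing SPDK-specific):

  # Show free 2 MB hugepages per NUMA node; 0 on node 1 matches the notice above.
  for n in /sys/devices/system/node/node*; do
      f="$n/hugepages/hugepages-2048kB/free_hugepages"
      [ -r "$f" ] && echo "$(basename "$n"): $(cat "$f") free 2MB hugepages"
  done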
00:34:01.408 [2024-05-15 16:55:08.383642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.408 [2024-05-15 16:55:08.384102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.384255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.384281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.408 [2024-05-15 16:55:08.384297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.408 [2024-05-15 16:55:08.384531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.408 [2024-05-15 16:55:08.384756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.408 [2024-05-15 16:55:08.384776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.408 [2024-05-15 16:55:08.384789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.387928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.408 [2024-05-15 16:55:08.397048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.408 [2024-05-15 16:55:08.397451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.397596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.397622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.408 [2024-05-15 16:55:08.397638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.408 [2024-05-15 16:55:08.397855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.408 [2024-05-15 16:55:08.398077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.408 [2024-05-15 16:55:08.398096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.408 [2024-05-15 16:55:08.398109] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.401237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.408 [2024-05-15 16:55:08.406772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:01.408 [2024-05-15 16:55:08.410498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.408 [2024-05-15 16:55:08.410952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.411107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.411132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.408 [2024-05-15 16:55:08.411149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.408 [2024-05-15 16:55:08.411394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.408 [2024-05-15 16:55:08.411623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.408 [2024-05-15 16:55:08.411644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.408 [2024-05-15 16:55:08.411658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.414760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.408 [2024-05-15 16:55:08.423918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.408 [2024-05-15 16:55:08.424502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.424697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.424732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.408 [2024-05-15 16:55:08.424752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.408 [2024-05-15 16:55:08.425004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.408 [2024-05-15 16:55:08.425241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.408 [2024-05-15 16:55:08.425264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.408 [2024-05-15 16:55:08.425280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.428434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
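"Total cores available: 3" agrees with the -m 0xE mask decoded earlier: bits 1 through 3 are set, so spdk_app_start spins up reactors on cores 1, 2 and 3. A hedged cross-check once the target is running (1937340 is the new nvmf_tgt PID from the log; each reactor thread should be pinned to a single core inside that mask):

  # List per-thread CPU affinity for the new target process.
  taskset -a -p 1937340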
00:34:01.408 [2024-05-15 16:55:08.437480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.408 [2024-05-15 16:55:08.437905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.438040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.438066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.408 [2024-05-15 16:55:08.438082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.408 [2024-05-15 16:55:08.438312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.408 [2024-05-15 16:55:08.438551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.408 [2024-05-15 16:55:08.438586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.408 [2024-05-15 16:55:08.438599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.441696] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.408 [2024-05-15 16:55:08.450837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.408 [2024-05-15 16:55:08.451297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.451423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.408 [2024-05-15 16:55:08.451450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.408 [2024-05-15 16:55:08.451467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.408 [2024-05-15 16:55:08.451701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.408 [2024-05-15 16:55:08.451910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.408 [2024-05-15 16:55:08.451929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.408 [2024-05-15 16:55:08.451943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.408 [2024-05-15 16:55:08.455039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.408 [2024-05-15 16:55:08.464171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.464757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.464928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.464955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.464984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.465247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.465481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.465502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.465518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.468635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.409 [2024-05-15 16:55:08.477622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.478049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.478179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.478204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.478229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.478465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.478689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.478710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.478723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.481823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.409 [2024-05-15 16:55:08.490969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.491352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.491501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.491526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.491543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.491762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.492008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.492028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.492041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.493959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.409 [2024-05-15 16:55:08.493993] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.409 [2024-05-15 16:55:08.494022] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.409 [2024-05-15 16:55:08.494034] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.409 [2024-05-15 16:55:08.494045] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.409 [2024-05-15 16:55:08.494131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.409 [2024-05-15 16:55:08.494198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.409 [2024-05-15 16:55:08.494201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.409 [2024-05-15 16:55:08.495320] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.409 [2024-05-15 16:55:08.504642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.505186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.505354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.505381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.505400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.505627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.505851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.505873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.505889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.509158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.409 [2024-05-15 16:55:08.518316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.518842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.518989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.519015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.519036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.519271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.519497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.519519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.519535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.522840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.409 [2024-05-15 16:55:08.531856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.532405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.532604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.532630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.532650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.532894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.533112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.533133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.533149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.536392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.409 [2024-05-15 16:55:08.545489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.546064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.546230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.546256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.546275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.546502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.546753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.546775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.546792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.550198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.409 [2024-05-15 16:55:08.559315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.559781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.559942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.559968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.559986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.560233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.560471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.560493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.560510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.563871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.409 [2024-05-15 16:55:08.572917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.409 [2024-05-15 16:55:08.573456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.573631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.409 [2024-05-15 16:55:08.573659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.409 [2024-05-15 16:55:08.573679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.409 [2024-05-15 16:55:08.573922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.409 [2024-05-15 16:55:08.574140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.409 [2024-05-15 16:55:08.574161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.409 [2024-05-15 16:55:08.574177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.409 [2024-05-15 16:55:08.577444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.410 [2024-05-15 16:55:08.586477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.410 [2024-05-15 16:55:08.586934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.587091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.587117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.410 [2024-05-15 16:55:08.587134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.410 [2024-05-15 16:55:08.587366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.410 [2024-05-15 16:55:08.587602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.410 [2024-05-15 16:55:08.587623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.410 [2024-05-15 16:55:08.587638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.410 [2024-05-15 16:55:08.590829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.410 [2024-05-15 16:55:08.600171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.410 [2024-05-15 16:55:08.600537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.600685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.600711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.410 [2024-05-15 16:55:08.600727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.410 [2024-05-15 16:55:08.600944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.410 [2024-05-15 16:55:08.601165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.410 [2024-05-15 16:55:08.601186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.410 [2024-05-15 16:55:08.601200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.410 [2024-05-15 16:55:08.604495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.410 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.410 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:01.410 16:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:01.410 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.410 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.410 [2024-05-15 16:55:08.613827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.410 [2024-05-15 16:55:08.614231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.614365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.614391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.410 [2024-05-15 16:55:08.614407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.410 [2024-05-15 16:55:08.614638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.410 [2024-05-15 16:55:08.614853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.410 [2024-05-15 16:55:08.614873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.410 [2024-05-15 16:55:08.614895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.410 [2024-05-15 16:55:08.618185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.410 [2024-05-15 16:55:08.627472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.410 [2024-05-15 16:55:08.627854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.627973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.410 [2024-05-15 16:55:08.627999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.410 [2024-05-15 16:55:08.628015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.410 [2024-05-15 16:55:08.628240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.410 [2024-05-15 16:55:08.628462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.410 [2024-05-15 16:55:08.628484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.410 [2024-05-15 16:55:08.628513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.410 [2024-05-15 16:55:08.631854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.668 [2024-05-15 16:55:08.641153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.668 [2024-05-15 16:55:08.641566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.641706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.641731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.668 [2024-05-15 16:55:08.641747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.668 [2024-05-15 16:55:08.641963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.668 [2024-05-15 16:55:08.642192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.668 [2024-05-15 16:55:08.642237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.668 [2024-05-15 16:55:08.642251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.668 [2024-05-15 16:55:08.644591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.668 [2024-05-15 16:55:08.645566] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.668 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.668 [2024-05-15 16:55:08.654749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.668 [2024-05-15 16:55:08.655144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.655276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.655302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.668 [2024-05-15 16:55:08.655322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.668 [2024-05-15 16:55:08.655539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.668 [2024-05-15 16:55:08.655768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.668 [2024-05-15 16:55:08.655788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.668 [2024-05-15 16:55:08.655802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.668 [2024-05-15 16:55:08.659027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.668 [2024-05-15 16:55:08.668180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.668 [2024-05-15 16:55:08.668578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.668729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.668754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.668 [2024-05-15 16:55:08.668770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.668 [2024-05-15 16:55:08.669000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.668 [2024-05-15 16:55:08.669213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.668 [2024-05-15 16:55:08.669258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.668 [2024-05-15 16:55:08.669271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.668 [2024-05-15 16:55:08.672485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.668 [2024-05-15 16:55:08.681876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.668 [2024-05-15 16:55:08.682479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.682644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.668 [2024-05-15 16:55:08.682670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.668 [2024-05-15 16:55:08.682689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.669 [2024-05-15 16:55:08.682931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.669 [2024-05-15 16:55:08.683150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.669 [2024-05-15 16:55:08.683171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.669 [2024-05-15 16:55:08.683187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.669 [2024-05-15 16:55:08.686491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.669 Malloc0 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.669 [2024-05-15 16:55:08.695484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.669 [2024-05-15 16:55:08.695876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.669 [2024-05-15 16:55:08.696029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.669 [2024-05-15 16:55:08.696055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999d30 with addr=10.0.0.2, port=4420 00:34:01.669 [2024-05-15 16:55:08.696071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999d30 is same with the state(5) to be set 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:01.669 [2024-05-15 16:55:08.696296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999d30 (9): Bad file descriptor 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.669 [2024-05-15 16:55:08.696517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.669 [2024-05-15 16:55:08.696554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.669 [2024-05-15 16:55:08.696567] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.669 [2024-05-15 16:55:08.699907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.669 [2024-05-15 16:55:08.707608] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:01.669 [2024-05-15 16:55:08.707888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.669 [2024-05-15 16:55:08.709062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.669 16:55:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1936672 00:34:01.669 [2024-05-15 16:55:08.747119] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:11.642 00:34:11.642 Latency(us) 00:34:11.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:11.642 Verification LBA range: start 0x0 length 0x4000 00:34:11.642 Nvme1n1 : 15.01 6626.43 25.88 8382.07 0.00 8503.49 898.09 24175.50 00:34:11.642 =================================================================================================================== 00:34:11.642 Total : 6626.43 25.88 8382.07 0.00 8503.49 898.09 24175.50 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:11.642 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:11.642 rmmod nvme_tcp 00:34:11.642 rmmod nvme_fabrics 00:34:11.643 rmmod nvme_keyring 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@125 -- # return 0 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1937340 ']' 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1937340 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1937340 ']' 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1937340 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1937340 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1937340' 00:34:11.643 killing process with pid 1937340 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1937340 00:34:11.643 [2024-05-15 16:55:18.139778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1937340 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.643 16:55:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.542 16:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:13.542 00:34:13.542 real 0m23.012s 00:34:13.542 user 0m59.787s 00:34:13.542 sys 0m4.800s 00:34:13.542 16:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:13.542 16:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.542 ************************************ 00:34:13.542 END TEST nvmf_bdevperf 00:34:13.542 ************************************ 00:34:13.542 16:55:20 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:13.542 16:55:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:13.542 16:55:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:13.542 16:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.542 ************************************ 00:34:13.542 START TEST nvmf_target_disconnect 00:34:13.542 ************************************ 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:13.542 * Looking for test storage... 00:34:13.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.542 16:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.543 16:55:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:16.069 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:16.070 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:16.070 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.070 16:55:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:16.070 Found net devices under 0000:09:00.0: cvl_0_0 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:16.070 Found net devices under 0000:09:00.1: cvl_0_1 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:16.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:34:16.070 00:34:16.070 --- 10.0.0.2 ping statistics --- 00:34:16.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.070 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:34:16.070 00:34:16.070 --- 10.0.0.1 ping statistics --- 00:34:16.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.070 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:16.070 16:55:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.070 ************************************ 00:34:16.070 START TEST nvmf_target_disconnect_tc1 00:34:16.070 ************************************ 00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:16.071 
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:34:16.071 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:16.071 EAL: No free 2048 kB hugepages reported on node 1
00:34:16.328 [2024-05-15 16:55:23.323157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.328 [2024-05-15 16:55:23.323384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.328 [2024-05-15 16:55:23.323417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d1520 with addr=10.0.0.2, port=4420
00:34:16.328 [2024-05-15 16:55:23.323455] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:34:16.328 [2024-05-15 16:55:23.323483] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:34:16.328 [2024-05-15 16:55:23.323499] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:34:16.329 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:34:16.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:34:16.329 Initializing NVMe Controllers
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:34:16.329
00:34:16.329 real	0m0.109s
00:34:16.329 user	0m0.037s
00:34:16.329 sys	0m0.072s
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:34:16.329 ************************************
00:34:16.329 END TEST nvmf_target_disconnect_tc1
00:34:16.329 ************************************
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:16.329 ************************************
00:34:16.329 START TEST nvmf_target_disconnect_tc2
00:34:16.329 ************************************
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1940779
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1940779
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1940779 ']'
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:16.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
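
disconnect_init starts nvmf_tgt inside the namespace with core mask 0xF0, i.e. reactors on cores 4 through 7 (visible in the reactor_run notices below), which keeps it off the cores the initiator workload uses (-c 0xF, cores 0 through 3). waitforlisten then blocks until the app answers on its RPC socket. A rough equivalent of that wait, assuming scripts/rpc.py from the SPDK tree; the real helper in autotest_common.sh does more bookkeeping:

```bash
# Rough equivalent of waitforlisten: poll until the SPDK app's RPC
# socket answers, bailing out if the process dies first.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited prematurely
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}

wait_for_rpc 1940779    # PID recorded as nvmfpid=1940779 above
```
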
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.329 [2024-05-15 16:55:23.440055] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization...
00:34:16.329 [2024-05-15 16:55:23.440136] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:16.329 EAL: No free 2048 kB hugepages reported on node 1
00:34:16.329 [2024-05-15 16:55:23.514753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:16.329 [2024-05-15 16:55:23.602476] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:16.329 [2024-05-15 16:55:23.602545] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:16.329 [2024-05-15 16:55:23.602559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:16.329 [2024-05-15 16:55:23.602570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:16.329 [2024-05-15 16:55:23.602580] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:16.329 [2024-05-15 16:55:23.602660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:16.329 [2024-05-15 16:55:23.602728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:16.329 [2024-05-15 16:55:23.602793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:16.329 [2024-05-15 16:55:23.602796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.329 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.587 Malloc0
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.587 [2024-05-15 16:55:23.781685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.587 [2024-05-15 16:55:23.809675] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:34:16.587 [2024-05-15 16:55:23.809962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:16.587 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1940919
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
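
The rpc_cmd calls above fully provision the target: a 64 MiB malloc bdev with 512-byte blocks, a TCP transport (with the harness's '-t tcp -o' options), subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets the serial number), the bdev attached as a namespace, and data plus discovery listeners on 10.0.0.2:4420. Outside the harness the same stack can be built with scripts/rpc.py against the app's Unix RPC socket; a sketch, since rpc_cmd is just the test wrapper around exactly these calls:

```bash
RPC='scripts/rpc.py -s /var/tmp/spdk.sock'        # socket path printed by waitforlisten above

$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B block size
$RPC nvmf_create_transport -t tcp -o              # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Note the nvmf_rpc.c warning recorded in the trace: passing the transport through [listen_]address.transport is deprecated in favor of trtype and is slated for removal in v24.09.
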
00:34:16.844 16:55:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:16.844 EAL: No free 2048 kB hugepages reported on node 1
00:34:18.748 16:55:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1940779
00:34:18.748 16:55:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:34:18.748 Read completed with error (sct=0, sc=8)
00:34:18.748 starting I/O failed
00:34:18.748 Write completed with error (sct=0, sc=8)
00:34:18.748 starting I/O failed
[... every remaining Read/Write completion on this qpair fails with the same (sct=0, sc=8) / "starting I/O failed" pair ...]
00:34:18.748 [2024-05-15 16:55:25.836397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:18.749 Read completed with error (sct=0, sc=8)
00:34:18.749 starting I/O failed
[... the same completion-error pattern repeats for every outstanding I/O ...]
00:34:18.749 [2024-05-15 16:55:25.836724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:18.749 Read completed with error (sct=0, sc=8)
00:34:18.749 starting I/O failed
[... the same completion-error pattern repeats for every outstanding I/O ...]
00:34:18.749 [2024-05-15 16:55:25.837023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:18.749 Read completed with error (sct=0, sc=8)
00:34:18.749 starting I/O failed
[... the same completion-error pattern repeats for every outstanding I/O ...]
00:34:18.749 [2024-05-15 16:55:25.837375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.749 [2024-05-15 16:55:25.837577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.749 [2024-05-15 16:55:25.837746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.749 [2024-05-15 16:55:25.837790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420
00:34:18.749 qpair failed and we were unable to recover it.
[... three more identical connect()/sock-error/"qpair failed" blocks follow immediately for tqpair=0x7f6f3c000b90 ...]
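
Both failure signatures above trace back to the kill -9. First, every queued I/O completes with (sct=0, sc=8): sct=0 is the generic command status code type, and status 0x08 in that table reads as Command Aborted due to SQ Deletion, with each qpair then reporting CQ transport error -6 (ENXIO, No such device or address). Second, every reconnect attempt dies in connect() with errno 111 (ECONNREFUSED), because nothing is listening on 10.0.0.2:4420 any more. That second condition can be confirmed from the namespace while the target is down (an illustrative check, not part of the test itself):

```bash
# While the target is down there is no listener on 4420, so the kernel
# answers every SYN with RST and connect() returns ECONNREFUSED (111).
ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' \
    || echo 'no listener on 4420 -> connect() will fail with errno 111'
```
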
[... the same three-line connect() failure block then repeats for each further reconnect attempt against tqpair=0x7f6f3c000b90, timestamps advancing from 16:55:25.839 through 16:55:25.861, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:34:18.752 [2024-05-15 16:55:25.862081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.752 [2024-05-15 16:55:25.862296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.752 [2024-05-15 16:55:25.862326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:18.752 qpair failed and we were unable to recover it.
[... the retry loop continues identically against tqpair=0x2047570 ...]
00:34:18.752 [2024-05-15 16:55:25.866672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.752 [2024-05-15 16:55:25.866836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.752 [2024-05-15 16:55:25.866862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:18.752 qpair failed and we were unable to recover it.
00:34:18.752 [2024-05-15 16:55:25.866982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.867135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.867163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.867351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.867469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.867494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.867614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.867773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.867801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.867932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.868072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.868098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.868276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.868394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.868419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.868565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.868720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.868757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.868942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.869131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.869159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 
00:34:18.752 [2024-05-15 16:55:25.869316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.869437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.869462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.869629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.869765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-05-15 16:55:25.869791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.752 qpair failed and we were unable to recover it. 00:34:18.752 [2024-05-15 16:55:25.869935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.870091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.870119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.870283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.870404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.870429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.870542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.870693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.870729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.870893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.871057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.871083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.871289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.871477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.871505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 
00:34:18.753 [2024-05-15 16:55:25.871652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.871832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.871865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.872021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.872151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.872181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.872297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.872442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.872467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.872624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.872771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.872795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.872914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.873021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.873046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.873186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.873356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.873382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.873563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.873720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.873749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 
00:34:18.753 [2024-05-15 16:55:25.873978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.874123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.874150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.874294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.874469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.874494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.874659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.874790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.874821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.874970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.875131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.875157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.875295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.875433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.875458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.875629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.875802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.875829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.875969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.876105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.876134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 
00:34:18.753 [2024-05-15 16:55:25.876317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.876451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.876476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.876586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.876715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.876740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.876879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.877056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-05-15 16:55:25.877081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.753 qpair failed and we were unable to recover it. 00:34:18.753 [2024-05-15 16:55:25.877248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.877414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.877439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.877601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.877747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.877772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.877955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.878139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.878167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.878342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.878455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.878480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 
00:34:18.754 [2024-05-15 16:55:25.878643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.878777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.878802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.878981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.879110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.879140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.879314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.879435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.879462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.879598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.879776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.879804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.879949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.880087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.880113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.880237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.880379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.880405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.880583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.880755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.880783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 
00:34:18.754 [2024-05-15 16:55:25.880937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.881089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.881117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.881277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.881412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.881438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.881619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.881732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.881759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.881885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.882037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.882066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.882233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.882354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.882380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.882499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.882652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.882680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.882867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 
00:34:18.754 [2024-05-15 16:55:25.883195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.883487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.883842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.883971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.884001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.884137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.884300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.884326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.884503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.884668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.884694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.884833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.884977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.885005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.885238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.885366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.885391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 
00:34:18.754 [2024-05-15 16:55:25.885505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.885680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.885706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.885934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.886097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.886123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.886273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.886385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.886410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.886527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.886664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.886690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.886854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.887031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-15 16:55:25.887059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-15 16:55:25.887192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.887349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.887374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.887517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.887693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.887719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-15 16:55:25.887860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.888176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.888505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.888842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.888988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.889021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.889184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.889322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.889348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.889492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.889685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.889711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.889855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.889991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-15 16:55:25.890186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.890463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.890824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.890992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.891144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.891275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.891301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.891440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.891592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.891619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.891777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.891914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.891943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.892121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.892272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.892298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-15 16:55:25.892420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.892530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.892556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.892721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.892880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.892906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.893021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.893137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.893163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.893285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.893407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.893432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.893595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.893733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.893759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.893874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.894206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-15 16:55:25.894480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.894785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.894943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.895134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.895287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.895313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.895439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.895569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.895598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.895760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.895926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.895952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.896098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.896276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.896302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-15 16:55:25.896421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-15 16:55:25.896559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.896589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-15 16:55:25.896733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.896872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.896898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-15 16:55:25.897009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.897112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.897136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-15 16:55:25.897266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.897377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.897403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-15 16:55:25.897545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.897686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.897713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-15 16:55:25.897882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.898067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.898096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-15 16:55:25.898287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.898403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.898429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-15 16:55:25.898572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.898715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-15 16:55:25.898741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-15 16:55:25.898904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.756 [2024-05-15 16:55:25.899073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.756 [2024-05-15 16:55:25.899099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:18.756 qpair failed and we were unable to recover it.
00:34:18.756 [... the same three-line sequence (two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x2047570 at addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 16:55:25.899243 through 16:55:25.948149; duplicates collapsed ...]
00:34:18.761 [2024-05-15 16:55:25.948275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.948407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.948436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.948596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.948726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.948752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.948913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.949039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.949067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.949270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.949403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.949432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.949601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.949740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.949766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.949917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.950078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.950104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.950211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.950339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.950384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 
00:34:18.761 [2024-05-15 16:55:25.950531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.950702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.950728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.950866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.951009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.951035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.951172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.951303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.951346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.951486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.951596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-15 16:55:25.951622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-15 16:55:25.951780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.951946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.951973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.952111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.952258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.952284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.952405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.952521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.952546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-15 16:55:25.952651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.952790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.952816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.952979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.953266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.953520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.953821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.953973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.954135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.954252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.954279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.954411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.954578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.954604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-15 16:55:25.954744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.954888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.954914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.955080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.955212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.955267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.955436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.955578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.955605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.955753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.955907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.955935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.956131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.956254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.956299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.956425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.956579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.956607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.956760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.956894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.956922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-15 16:55:25.957104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.957243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.957289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.957452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.957582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.957611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.957793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.957965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.957991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.958127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.958269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.958312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.958476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.958612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.958638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.958771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.958947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.958989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.959174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.959307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.959351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-15 16:55:25.959478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.959612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.959640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-15 16:55:25.959767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.959956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-15 16:55:25.959982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.960167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.960307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.960333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.960449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.960610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.960636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.960838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.960969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.960997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.961154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.961267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.961294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.961441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.961596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.961625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-15 16:55:25.961772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.961940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.961965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.962088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.962226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.962252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.962406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.962541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.962572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.962704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.962872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.962898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.963035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.963173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.963199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.963348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.963504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.963532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.963684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.963843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.963871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-15 16:55:25.964044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.964161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.964188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.964355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.964470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.964496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.964658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.964840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.964866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.965014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.965152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.965179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.965380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.965541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.965573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.965709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.965866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.965896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.966062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.966209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.966245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-15 16:55:25.966373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.966510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.966539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.966686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.966828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.966854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-15 16:55:25.966970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.967090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-15 16:55:25.967118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.967282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.967398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.967424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.967615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.967793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.967822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.967959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.968248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 
00:34:19.039 [2024-05-15 16:55:25.968533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.968812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.968969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.969099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.969266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.969293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.969466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.969638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.969667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.969851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.970224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.970533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 
00:34:19.039 [2024-05-15 16:55:25.970885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.970999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.971203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.971475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-15 16:55:25.971749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-15 16:55:25.971961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.972117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.972306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.972333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.972455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.972606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.972635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.972763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.972914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.972940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-15 16:55:25.973084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.973285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.973314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.973453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.973579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.973610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.973776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.973923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.973951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.974151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.974298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.974325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.974436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.974618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.974646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.974784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.974970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.974999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.975118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.975270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.975300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-15 16:55:25.975434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.975620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.975646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.975811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.975948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.975991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.976145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.976315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.976341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.976486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.976596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.976623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.976769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.976881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.976907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.977069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.977232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.977260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.977416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.977542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.977568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-15 16:55:25.977686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.977802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.977828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.977969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.978113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.978139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.978264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.978424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.978450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.978592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.978699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.978725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.978846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.979003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.979034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.979194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.979338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.979364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-15 16:55:25.979526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.979674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-15 16:55:25.979704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-15 16:55:25.979857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.040 [2024-05-15 16:55:25.980015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.040 [2024-05-15 16:55:25.980040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.040 qpair failed and we were unable to recover it.
[... the same four-record sequence repeats for every reconnect attempt between 16:55:25.980 and 16:55:26.032: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f6f4c000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:34:19.046 [2024-05-15 16:55:26.032047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.046 [2024-05-15 16:55:26.032175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.046 [2024-05-15 16:55:26.032204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.046 qpair failed and we were unable to recover it.
00:34:19.046 [2024-05-15 16:55:26.032380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.032544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.032597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.032756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.032895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.032923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.033066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.033173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.033200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.033377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.033517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.033545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.033855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.034038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.034068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.034251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.034373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.034399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.034543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.034701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.034731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 
00:34:19.046 [2024-05-15 16:55:26.034891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.035070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.035099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.035280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.035456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.035482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.035644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.035760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.035786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.035950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.036094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.036120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.036290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.036451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.036477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.036613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.036754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.036780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.036988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.037139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.037165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 
00:34:19.046 [2024-05-15 16:55:26.037307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.037447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.037473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.037639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.037803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.037832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.037994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.038181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.038207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.038334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.038475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.038501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.038633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.038836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.038862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.038995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.039127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.039156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.039317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.039486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.039529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 
00:34:19.046 [2024-05-15 16:55:26.039693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.039832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.039859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.039971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.040133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.040159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.040330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.040450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.046 [2024-05-15 16:55:26.040479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.046 qpair failed and we were unable to recover it. 00:34:19.046 [2024-05-15 16:55:26.040666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.040920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.040978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.041146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.041290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.041318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.041495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.041758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.041813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.041968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.042159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.042185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-15 16:55:26.042389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.042548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.042582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.042767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.042909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.042952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.043104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.043238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.043267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.043458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.043574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.043601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.043831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.044014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.044042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.044198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.044322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.044349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.044519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.044701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.044730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-15 16:55:26.044891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.045170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.045455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.045732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.045930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.046110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.046261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.046290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.046420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.046597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.046649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.046815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.046922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.046948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-15 16:55:26.047109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.047261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.047291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.047423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.047611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.047637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.047782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.047987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.048013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.048126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.048290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.048333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.048493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.048654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.048688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.048874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.049043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.049080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.049225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.049342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.049383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-15 16:55:26.049558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.049700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.049726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.049839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.050023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.050051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.050185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.050353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.050379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.050498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.050660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.050686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-15 16:55:26.050824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.051004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-15 16:55:26.051032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.051179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.051367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.051397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.051575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.051733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.051761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-15 16:55:26.051939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.052111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.052136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.052241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.052377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.052403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.052545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.052686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.052716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.052905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.053056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.053085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.053273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.053436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.053463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.053628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.053766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.053794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.053939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.054048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.054076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-15 16:55:26.054268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.054433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.054459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.054600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.054771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.054800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.054989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.055164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.055206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.055408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.055593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.055622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.055802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.055976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.056004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.056165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.056325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.056355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.056525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.056664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.056690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-15 16:55:26.056888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.057025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.057051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.057224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.057338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.057365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.057538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.057767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.057831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.057973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.058117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.058144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.058315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.058471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.058500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.058673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.058844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.058870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.059024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.059160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.059190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-15 16:55:26.059372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.059514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.059539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.059680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.059862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.059890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-15 16:55:26.060029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-15 16:55:26.060207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.060245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.060410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.060572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.060601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.060770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.060934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.060975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.061124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.061316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.061343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.061463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.061614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.061642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-15 16:55:26.061797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.061936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.061962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.062108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.062260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.062287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.062481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.062729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.062782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.062935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.063129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.063156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.063281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.063423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.063449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.063626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.063765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.063790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.063927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.064052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.064080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-15 16:55:26.064224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.064410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.064438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.064689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.064862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.064925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.065065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.065200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.065234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.065407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.065584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.065612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.065782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.065923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.065948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.066093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.066294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.066321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.066440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.066561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.066587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-15 16:55:26.066733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.066899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.066925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.067071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.067244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.067273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.067443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.067612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.067637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.067803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.067925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.067950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.068068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.068224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.068268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.068432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.068559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.068587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-15 16:55:26.068776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.068904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-15 16:55:26.068933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.054 [2024-05-15 16:55:26.115966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.116108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.116133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.116270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.116424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.116452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.116599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.116767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.116792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.116940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.117135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.117162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.117277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.117443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.117468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.117642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.117837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.117894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.118051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.118241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.118268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 
00:34:19.054 [2024-05-15 16:55:26.118387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.118544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.118573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.054 qpair failed and we were unable to recover it. 00:34:19.054 [2024-05-15 16:55:26.118757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.118898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.054 [2024-05-15 16:55:26.118942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.119125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.119294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.119321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.119432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.119563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.119592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.119746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.119928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.119956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.120144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.120289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.120333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.120461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.120616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.120645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 
00:34:19.055 [2024-05-15 16:55:26.120801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.120946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.120977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.121161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.121317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.121346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.121525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.121733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.121811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.121973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.122155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.122184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.122328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.122512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.122542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.122808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.122958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.122992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.123167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.123312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.123338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 
00:34:19.055 [2024-05-15 16:55:26.123482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.123617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.123646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.123767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.123946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.123974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.124146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.124265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.124292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.124466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.124608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.124635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.124781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.124976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.125004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.125132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.125263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.125293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.125428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.125664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.125725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 
00:34:19.055 [2024-05-15 16:55:26.125888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.126213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.126473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.126762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.126944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.127114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.127256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.127285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.127425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.127582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.127611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.127744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.127888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.127914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 
00:34:19.055 [2024-05-15 16:55:26.128061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.128258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.128284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.128454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.128616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.128645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.128802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.128968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.128994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.055 qpair failed and we were unable to recover it. 00:34:19.055 [2024-05-15 16:55:26.129140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.055 [2024-05-15 16:55:26.129307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.129333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.129484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.129653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.129695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.129829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.129945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.129971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.130137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.130281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.130308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 
00:34:19.056 [2024-05-15 16:55:26.130456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.130597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.130627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.130806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.130925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.130953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.131118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.131263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.131307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.131469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.131659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.131687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.131846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.131999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.132028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.132155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.132319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.132348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.132509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.132623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.132649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 
00:34:19.056 [2024-05-15 16:55:26.132816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.132994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.133023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.133223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.133351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.133376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.133546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.133697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.133725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.133859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.134006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.134032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.134156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.134331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.134358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.134562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.134720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.134758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.134915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.135101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.135130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 
00:34:19.056 [2024-05-15 16:55:26.135321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.135439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.135464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.135648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.135792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.135820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.135943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.136094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.136123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.136285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.136412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.136442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.136649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.136776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.136814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.136963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.137145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.137172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.137324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.137459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.137485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 
00:34:19.056 [2024-05-15 16:55:26.137687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.137853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.137879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.137986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.138100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.138128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.138274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.138443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.138470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.138630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.138740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.138766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.138931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.139087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.139115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.056 qpair failed and we were unable to recover it. 00:34:19.056 [2024-05-15 16:55:26.139285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.056 [2024-05-15 16:55:26.139420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.139461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.139642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.139802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.139828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 
00:34:19.057 [2024-05-15 16:55:26.139959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.140109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.140137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.140260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.140415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.140440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.140583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.140747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.140791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.140926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.141082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.141113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.141317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.141425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.141451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.141580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.141748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.141776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.141922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.142046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.142075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 
00:34:19.057 [2024-05-15 16:55:26.142237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.142383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.142408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.142556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.142695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.142722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.142851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.143171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.143467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.143803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.143997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.144150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.144277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.144321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 
00:34:19.057 [2024-05-15 16:55:26.144448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.144566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.144593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.144788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.144938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.144963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.145111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.145281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.145307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.145420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.145612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.145640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.145821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.145935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.145978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.146161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.146302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.146328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.146467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.146670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.146751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 
00:34:19.057 [2024-05-15 16:55:26.146905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.147058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.147086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.147259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.147368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.147393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.147546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.147734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.147759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.057 qpair failed and we were unable to recover it. 00:34:19.057 [2024-05-15 16:55:26.147931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.057 [2024-05-15 16:55:26.148049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.148077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.058 qpair failed and we were unable to recover it. 00:34:19.058 [2024-05-15 16:55:26.148233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.148376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.148406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.058 qpair failed and we were unable to recover it. 00:34:19.058 [2024-05-15 16:55:26.148585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.148772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.148800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.058 qpair failed and we were unable to recover it. 00:34:19.058 [2024-05-15 16:55:26.148924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.149080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.058 [2024-05-15 16:55:26.149106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.058 qpair failed and we were unable to recover it. 
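For reference: errno = 111 on Linux is ECONNREFUSED, meaning the target host answered but nothing was listening on 10.0.0.2:4420 (the conventional NVMe/TCP port) while the initiator kept retrying. The standalone sketch below is illustrative only, not SPDK source; the address and port are copied from the log, and it reproduces the same connect() failure that posix_sock_create() reports:

/*
 * Illustrative sketch only (not SPDK source). Shows that connect()
 * to a reachable host with no listener on the port fails with
 * errno = 111 (ECONNREFUSED) on Linux, which is the error
 * posix_sock_create() logs above. Address and port are copied from
 * the log; point them at any host/port with no listener.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target down this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}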
00:34:19.058 [2024-05-15 16:55:26.149272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.058 [2024-05-15 16:55:26.149412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.058 [2024-05-15 16:55:26.149437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.058 qpair failed and we were unable to recover it.
00:34:19.058 [2024-05-15 16:55:26.149574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20550f0 is same with the state(5) to be set
00:34:19.058 [2024-05-15 16:55:26.149762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.058 [2024-05-15 16:55:26.149985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.058 [2024-05-15 16:55:26.150042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420
00:34:19.058 qpair failed and we were unable to recover it.
00:34:19.058 [the same cycle repeats against tqpair=0x7f6f44000b90 from 16:55:26.150226 through 16:55:26.151768, differing only in timestamps]
00:34:19.058 [2024-05-15 16:55:26.152050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.058 [2024-05-15 16:55:26.152242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.058 [2024-05-15 16:55:26.152285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.058 qpair failed and we were unable to recover it.
00:34:19.058-00:34:19.059 [the same cycle resumes against tqpair=0x7f6f4c000b90 and repeats from 16:55:26.152408 through 16:55:26.161459, differing only in timestamps]
00:34:19.059 [2024-05-15 16:55:26.161606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.161791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.161820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.161976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.162100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.162129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.162319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.162444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.162471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.162651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.162793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.162821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.162984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.163211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.163262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.163412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.163548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.163590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.163785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.163968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.163996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 
00:34:19.059 [2024-05-15 16:55:26.164242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.164403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.164430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.164570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.164733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.164774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.164927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.165115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.165140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.165325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.165472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.165497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.165694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.165799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.165824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.166069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.166306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.166334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.166456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.166605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.166633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 
00:34:19.059 [2024-05-15 16:55:26.166821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.166960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.167004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.167186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.167328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.167354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.167492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.167748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.167807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.167948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.168085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.168111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.168287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.168452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.168481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.168603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.168781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.168810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.059 qpair failed and we were unable to recover it. 00:34:19.059 [2024-05-15 16:55:26.168996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.169112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.059 [2024-05-15 16:55:26.169137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 
00:34:19.060 [2024-05-15 16:55:26.169294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.169425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.169453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.169605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.169770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.169799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.169940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.170078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.170103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.170224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.170380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.170408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.170586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.170722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.170748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.170923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.171067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.171092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.171226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.171363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.171392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 
00:34:19.060 [2024-05-15 16:55:26.171521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.171716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.171745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.171879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.172236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.172562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.172849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.172989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.173036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.173198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.173383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.173411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.173588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.173714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.173742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 
00:34:19.060 [2024-05-15 16:55:26.173905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.174258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.174581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.174864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.174999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.175024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.175214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.175381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.175406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.175571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.175697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.175725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.175882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.176001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.176027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 
00:34:19.060 [2024-05-15 16:55:26.176142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.176294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.176327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.176469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.176692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.176750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.176935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.177053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.177097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.177229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.177378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.177403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.177568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.177740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.177768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.177931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.178075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.178116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.178296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.178474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.178502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 
00:34:19.060 [2024-05-15 16:55:26.178656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.178839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.178866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.060 qpair failed and we were unable to recover it. 00:34:19.060 [2024-05-15 16:55:26.179031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.060 [2024-05-15 16:55:26.179186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.179235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.179394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.179516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.179544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.179710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.179847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.179876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.180035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.180177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.180203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.180332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.180465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.180493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.180645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.180803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.180832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 
00:34:19.061 [2024-05-15 16:55:26.180966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.181129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.181154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.181315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.181459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.181484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.181645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.181833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.181858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.182001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.182121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.182146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.182293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.182528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.182554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.182714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.182876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.182901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.183088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.183221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.183250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 
00:34:19.061 [2024-05-15 16:55:26.183420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.183555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.183585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.183757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.183921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.183947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.184091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.184233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.184281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.184472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.184587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.184615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.184795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.184940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.184965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.185076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.185203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.185242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.185387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.185539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.185569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 
00:34:19.061 [2024-05-15 16:55:26.185736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.185865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.185894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.186081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.186227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.186265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.186405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.186625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.186685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.186851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.187029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.187058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.187228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.187411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.187436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.187592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.187716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.187745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.061 qpair failed and we were unable to recover it. 00:34:19.061 [2024-05-15 16:55:26.187912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.061 [2024-05-15 16:55:26.188050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.188076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 
00:34:19.062 [2024-05-15 16:55:26.188239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.188389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.188415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.188622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.188864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.188919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.189097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.189270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.189296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.189442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.189581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.189607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.189768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.189946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.189974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.190154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.190322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.190348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.190529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.190672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.190695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 
00:34:19.062 [2024-05-15 16:55:26.190847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.191227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.191479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.191752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.191936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.192126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.192296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.192321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.192471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.192588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.192612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.192736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.193515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.193548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 
00:34:19.062 [2024-05-15 16:55:26.193707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.193843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.193869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.194017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.194131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.194157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.194340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.194481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.194507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.194653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.194796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.194822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.194979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.195122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.195166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.195332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.195527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.195595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 00:34:19.062 [2024-05-15 16:55:26.195732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.195920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.062 [2024-05-15 16:55:26.195945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.062 qpair failed and we were unable to recover it. 
00:34:19.062 [2024-05-15 16:55:26.196088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.062 [2024-05-15 16:55:26.196244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.062 [2024-05-15 16:55:26.196278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.062 qpair failed and we were unable to recover it.
00:34:19.062 [... the same connect()-refused / qpair-failed sequence repeats ~150 more times between 16:55:26.196 and 16:55:26.246, every attempt against tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420, always with errno = 111 ...]
00:34:19.067 [2024-05-15 16:55:26.246698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.067 [2024-05-15 16:55:26.246838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.067 [2024-05-15 16:55:26.246864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.067 qpair failed and we were unable to recover it.
00:34:19.067 [2024-05-15 16:55:26.246979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.247109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.247139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.067 qpair failed and we were unable to recover it. 00:34:19.067 [2024-05-15 16:55:26.247315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.247440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.247465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.067 qpair failed and we were unable to recover it. 00:34:19.067 [2024-05-15 16:55:26.247640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.247822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.247850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.067 qpair failed and we were unable to recover it. 00:34:19.067 [2024-05-15 16:55:26.248043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.248174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.248200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.067 qpair failed and we were unable to recover it. 00:34:19.067 [2024-05-15 16:55:26.248326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.067 [2024-05-15 16:55:26.248492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.248518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.248657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.248799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.248825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.248961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.249123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.249164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 
00:34:19.346 [2024-05-15 16:55:26.249353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.249477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.249503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.249619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.249742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.249768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.249943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.250099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.250127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.250323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.250445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.250471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.250646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.250815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.250856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.346 qpair failed and we were unable to recover it. 00:34:19.346 [2024-05-15 16:55:26.250987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.251165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.346 [2024-05-15 16:55:26.251194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.251384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.251510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.251539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 
00:34:19.347 [2024-05-15 16:55:26.251717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.251842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.251868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.252019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.252187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.252221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.252365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.252546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.252572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.252685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.252831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.252856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.253025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.253177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.253206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.253349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.253463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.253489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.253638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.253756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.253786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 
00:34:19.347 [2024-05-15 16:55:26.253929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.254237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.254574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.254851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.254999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.255027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.255185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.255318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.255358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.255515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.255630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.255657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.255805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.255989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 
00:34:19.347 [2024-05-15 16:55:26.256176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.256510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.256776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.256935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.257111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.257243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.257270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.257423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.257547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.257574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.257753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.257876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.257906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.258059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.258176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.258205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 
00:34:19.347 [2024-05-15 16:55:26.258354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.258471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.258497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.258672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.258812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.258838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.258959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.259123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.259153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.259328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.259471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.259497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.259632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.259795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.259821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.259963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.260148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.260176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 00:34:19.347 [2024-05-15 16:55:26.260381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.260496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.260539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.347 qpair failed and we were unable to recover it. 
00:34:19.347 [2024-05-15 16:55:26.260747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.347 [2024-05-15 16:55:26.260974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.261033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.261227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.261372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.261401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.261544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.261662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.261689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.261836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.262079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.262133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.262284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.262406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.262432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.262573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.262708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.262738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.262912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.263066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.263095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 
00:34:19.348 [2024-05-15 16:55:26.263253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.263389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.263415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.263530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.263647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.263673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.263860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.264183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.264466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.264809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.264947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.265102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.265293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.265320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 
00:34:19.348 [2024-05-15 16:55:26.265437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.265545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.265571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.265693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.265860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.265886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.266039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.266181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.266210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.266347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.266486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.266513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.266828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.267026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.267052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.267196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.267319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.267345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.267511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.267727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.267785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 
00:34:19.348 [2024-05-15 16:55:26.268071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.268260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.268287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.268406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.268549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.268594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.268774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.268882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.268907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.269090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.269227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.269253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.269370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.269553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.269601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.269754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.269898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.269924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.270060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.270285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.270314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 
00:34:19.348 [2024-05-15 16:55:26.270450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.270583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.270612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.348 [2024-05-15 16:55:26.270781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.270899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.348 [2024-05-15 16:55:26.270925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.348 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.271078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.271221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.271247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.271391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.271527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.271553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.271740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.271879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.271905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.272069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.272333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.272376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.272488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.272666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.272694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 
00:34:19.349 [2024-05-15 16:55:26.272866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.272990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.273015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.273159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.273355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.273382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.273540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.273666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.273695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.273846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.273985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.274155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.274492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.274793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.274957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 
00:34:19.349 [2024-05-15 16:55:26.275141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.275269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.275298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.275455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.275611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.275640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.275788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.275967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.275996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.276184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.276414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.276441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.276606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.276798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.276855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.277043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.277189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.277222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.277366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.277508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.277537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 
00:34:19.349 [2024-05-15 16:55:26.277752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.277912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.277940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.278139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.278257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.278301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.278430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.278566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.278596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.278726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.278858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.278888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.279013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.279158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.279185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.279386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.279545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.279573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 00:34:19.349 [2024-05-15 16:55:26.279754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.279933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.349 [2024-05-15 16:55:26.279962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.349 qpair failed and we were unable to recover it. 
00:34:19.349 [2024-05-15 16:55:26.280120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.349 [2024-05-15 16:55:26.280226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.349 [2024-05-15 16:55:26.280253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.349 qpair failed and we were unable to recover it.
[... the same three-message failure group (two connect() failures, one nvme_tcp_qpair_connect_sock error for tqpair=0x2047570, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 16:55:26.280 and 16:55:26.332; duplicates elided ...]
00:34:19.355 [2024-05-15 16:55:26.332483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.355 [2024-05-15 16:55:26.332622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.355 [2024-05-15 16:55:26.332669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.355 qpair failed and we were unable to recover it.
00:34:19.355 [2024-05-15 16:55:26.332829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.333010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.333038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.333169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.333312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.333339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.333481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.333642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.333668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.333861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.334007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.334033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.334175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.334380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.334410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.334545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.334720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.334761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.334952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 
00:34:19.355 [2024-05-15 16:55:26.335229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.335566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.335837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.335975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.336001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.336136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.336298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.336328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.336494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.336661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.336686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.336853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.337189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 
00:34:19.355 [2024-05-15 16:55:26.337506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.337790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.337980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.338119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.338268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.338294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.338429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.338578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.338620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.338772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.338924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.338952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.339070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.339213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.355 [2024-05-15 16:55:26.339257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.355 qpair failed and we were unable to recover it. 00:34:19.355 [2024-05-15 16:55:26.339411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.339518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.339543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 
00:34:19.356 [2024-05-15 16:55:26.339709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.339863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.339891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.340068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.340303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.340333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.340481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.340622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.340648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.340858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.340996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.341165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.341490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.341769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.341934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 
00:34:19.356 [2024-05-15 16:55:26.342090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.342286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.342313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.342439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.342569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.342594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.342795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.342936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.342962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.343097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.343283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.343312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.343481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.343628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.343654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.343815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.343957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.343983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.344169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.344368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.344395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 
00:34:19.356 [2024-05-15 16:55:26.344521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.344665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.344692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.344831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.345007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.345036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.345187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.345321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.345350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.345495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.345631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.345657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.345828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.346014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.346040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.346181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.346360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.346389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.346555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.346722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.346748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 
00:34:19.356 [2024-05-15 16:55:26.346889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.347008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.347034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.347177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.347361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.347392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.347559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.347701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.347727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.347897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.348050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.348079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.348209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.348402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.348431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.348568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.348732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.348758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.348958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.349100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.349126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 
00:34:19.356 [2024-05-15 16:55:26.349261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.349384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.356 [2024-05-15 16:55:26.349413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.356 qpair failed and we were unable to recover it. 00:34:19.356 [2024-05-15 16:55:26.349580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.349723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.349749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.349918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.350075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.350104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.350285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.350420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.350446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.350591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.350733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.350759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.350943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.351122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.351150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.351332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.351489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.351518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 
00:34:19.357 [2024-05-15 16:55:26.351675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.351844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.351869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.352010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.352189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.352224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.352383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.352540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.352568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.352703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.352854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.352881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.353051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.353188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.353221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.353339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.353539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.353621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.353803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.353958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.353989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 
00:34:19.357 [2024-05-15 16:55:26.354122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.354289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.354316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.354438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.354544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.354570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.354691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.354836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.354863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.355063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.355199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.355251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.355417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.355547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.355576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.355733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.355872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.355898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.356035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.356188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.356223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 
00:34:19.357 [2024-05-15 16:55:26.356402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.356564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.356623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.356791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.356928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.356954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.357116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.357261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.357288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.357431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.357611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.357639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.357827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.357968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.357994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.358137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.358312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.358343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.358478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.358629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.358655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 
00:34:19.357 [2024-05-15 16:55:26.358795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.358937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.358963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.359105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.359277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.359320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.357 [2024-05-15 16:55:26.359477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.359633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.357 [2024-05-15 16:55:26.359661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.357 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.359843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.359983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.360024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.360152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.360312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.360341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.360494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.360652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.360680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.360843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.360987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 
00:34:19.358 [2024-05-15 16:55:26.361127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.361429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.361790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.361976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.362145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.362296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.362323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.362492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.362656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.362685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.362846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.362987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.363160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 
00:34:19.358 [2024-05-15 16:55:26.363499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.363788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.363977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.364166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.364330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.364357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.364526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.364645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.364674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.364855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.365023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.365063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.365226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.365383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.365412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.365539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.365689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.365719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 
00:34:19.358 [2024-05-15 16:55:26.365888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.366025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.366066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.366227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.366345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.366374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.366558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.366703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.366732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.366876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.367017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.367044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.367235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.367422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.367452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.367580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.367762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.367791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 00:34:19.358 [2024-05-15 16:55:26.367957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.368122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.358 [2024-05-15 16:55:26.368148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.358 qpair failed and we were unable to recover it. 
00:34:19.361 [2024-05-15 16:55:26.387009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.387181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.387207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.361 [2024-05-15 16:55:26.387387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.387584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.387610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.361 [2024-05-15 16:55:26.387720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.387870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.387898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.361 [2024-05-15 16:55:26.388037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.388168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.388194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.361 [2024-05-15 16:55:26.388330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.388468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.388497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.361 [2024-05-15 16:55:26.388649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.388823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.388851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.361 [2024-05-15 16:55:26.388989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.389183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.361 [2024-05-15 16:55:26.389212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:19.361 qpair failed and we were unable to recover it.
00:34:19.364 [2024-05-15 16:55:26.414801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.414944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.414986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.415168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.415360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.415388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.415536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.415644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.415675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.415820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.415968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.416155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.416461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.416741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.416923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 
00:34:19.364 [2024-05-15 16:55:26.417084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.417210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.417250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.417441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.417600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.417626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.417795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.417938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.417963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.418098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.418278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.418305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.418426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.418673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.418725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.418879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.419043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.419085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.419262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.419426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.419452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 
00:34:19.364 [2024-05-15 16:55:26.419629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.419782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.419812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.420005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.420119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.420160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.420334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.420443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.420468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.420657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.420810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.420839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.421025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.421171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.421197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.421347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.421492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.364 [2024-05-15 16:55:26.421533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.364 qpair failed and we were unable to recover it. 00:34:19.364 [2024-05-15 16:55:26.421659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.421874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.421935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 
00:34:19.365 [2024-05-15 16:55:26.422138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.422310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.422338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.422483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.422624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.422650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.422949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.423127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.423155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.423322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.423433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.423461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.423618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.423764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.423793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.423987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.424118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.424144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.424288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.424421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.424463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 
00:34:19.365 [2024-05-15 16:55:26.424641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.424820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.424848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.425001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.425178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.425207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.425355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.425495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.425522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.425640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.425782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.425808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.425955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.426097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.426126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.426281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.426447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.426473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.426615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.426730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.426756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 
00:34:19.365 [2024-05-15 16:55:26.426901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.427037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.427066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.427253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.427394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.427421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.427554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.427719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.427746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.427866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.428041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.428067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.428206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.428361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.428387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.428578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.428694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.428721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.428863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.429028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.429054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 
00:34:19.365 [2024-05-15 16:55:26.429228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.429409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.429435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.429618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.429763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.429789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.429952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.430098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.430127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.430266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.430388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.430414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.365 qpair failed and we were unable to recover it. 00:34:19.365 [2024-05-15 16:55:26.430555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.365 [2024-05-15 16:55:26.430706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.430736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.430916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.431058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.431086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.431253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.431395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.431436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 
00:34:19.366 [2024-05-15 16:55:26.431594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.431776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.431805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.431936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.432088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.432117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.432278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.432417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.432443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.432611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.432786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.432815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.432973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.433106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.433135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.433272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.433376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.433402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.433554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.433692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.433719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 
00:34:19.366 [2024-05-15 16:55:26.433893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.434084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.434110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.434251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.434420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.434445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.434587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.434734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.434763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.434924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.435051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.435080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.435210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.435353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.435379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.435514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.435696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.435725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.435915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.436070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.436099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 
00:34:19.366 [2024-05-15 16:55:26.436246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.436362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.436388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.436551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.436805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.436864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.437040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.437174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.437204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.437375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.437519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.437561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.437722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.437910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.437935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.438120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.438256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.438286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.438453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.438635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.438664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 
00:34:19.366 [2024-05-15 16:55:26.438815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.438991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.439019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.439178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.439364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.439394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.439535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.439699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.439724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.439923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.440113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.440140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.440269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.440454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.440483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.440622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.440766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.366 [2024-05-15 16:55:26.440793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.366 qpair failed and we were unable to recover it. 00:34:19.366 [2024-05-15 16:55:26.440920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.441064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.441093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 
00:34:19.367 [2024-05-15 16:55:26.441249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.441414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.441440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.441604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.441764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.441792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.441917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.442111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.442137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.442307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.442423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.442450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.442567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.442710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.442736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.442877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.443095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.443121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.443320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.443452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.443481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 
00:34:19.367 [2024-05-15 16:55:26.443669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.443812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.443838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.444026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.444204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.444240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.444394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.444552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.444581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.444730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.444855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.444881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.445026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.445200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.445235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.445401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.445620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.445674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.445850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.445993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.446037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 
00:34:19.367 [2024-05-15 16:55:26.446190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.446374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.446404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.446549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.446716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.446742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.446941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.447063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.447103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.447289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.447423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.447451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.447605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.447763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.447792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.447961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.448138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.448167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.448346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.448490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.448516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 
00:34:19.367 [2024-05-15 16:55:26.448684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.448859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.448888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.449046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.449152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.449178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.449335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.449479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.449504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.449623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.449758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.449786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.449950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.450114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.450156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.450284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.450452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.450478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.367 [2024-05-15 16:55:26.450613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.450863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.450922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 
00:34:19.367 [2024-05-15 16:55:26.451088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.451235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.367 [2024-05-15 16:55:26.451262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.367 qpair failed and we were unable to recover it. 00:34:19.368 [2024-05-15 16:55:26.451407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.451674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.451730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.368 qpair failed and we were unable to recover it. 00:34:19.368 [2024-05-15 16:55:26.451890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.452071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.452096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.368 qpair failed and we were unable to recover it. 00:34:19.368 [2024-05-15 16:55:26.452210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.452354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.452380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.368 qpair failed and we were unable to recover it. 00:34:19.368 [2024-05-15 16:55:26.452549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.452727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.452756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.368 qpair failed and we were unable to recover it. 00:34:19.368 [2024-05-15 16:55:26.452914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.453066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.453095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.368 qpair failed and we were unable to recover it. 00:34:19.368 [2024-05-15 16:55:26.453245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.453416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.368 [2024-05-15 16:55:26.453442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:19.368 qpair failed and we were unable to recover it. 
00:34:19.371 [2024-05-15 16:55:26.482824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.371 [2024-05-15 16:55:26.482976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.371 [2024-05-15 16:55:26.483009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420
00:34:19.371 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f6f3c000b90 (addr=10.0.0.2, port=4420) repeats through 16:55:26.502 ...]
00:34:19.372 [2024-05-15 16:55:26.502906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.372 qpair failed and we were unable to recover it. 00:34:19.372 [2024-05-15 16:55:26.503209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.372 qpair failed and we were unable to recover it. 00:34:19.372 [2024-05-15 16:55:26.503542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.372 qpair failed and we were unable to recover it. 00:34:19.372 [2024-05-15 16:55:26.503840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.503976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.504002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.372 qpair failed and we were unable to recover it. 00:34:19.372 [2024-05-15 16:55:26.504162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.372 [2024-05-15 16:55:26.504330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.504361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.504496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.504660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.504689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.504856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 
00:34:19.373 [2024-05-15 16:55:26.505222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.505534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.505837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.505980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.506150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.506444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.506794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.506958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.507133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.507275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.507304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 
00:34:19.373 [2024-05-15 16:55:26.507432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.507599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.507625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.507783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.507949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.507975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.508114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.508262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.508292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.508434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.508616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.508643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.508819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.508955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.508981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.509165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.509307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.509335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.509494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.509632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.509658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 
00:34:19.373 [2024-05-15 16:55:26.509862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.509976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.510200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.510469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.510825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.510993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.511104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.511232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.511274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.511416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.511576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.511607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.511793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.511907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.511934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 
00:34:19.373 [2024-05-15 16:55:26.512078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.512242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.512275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.512421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.512591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.512621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.512802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.512992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.513021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.513170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.513340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.513370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.513523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.513697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.513727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.513862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.514021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.514052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.373 qpair failed and we were unable to recover it. 00:34:19.373 [2024-05-15 16:55:26.514194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.373 [2024-05-15 16:55:26.514333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.514360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 
00:34:19.374 [2024-05-15 16:55:26.514519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.514700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.514734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.514963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.515108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.515135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.515315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.515448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.515479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.515656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.515773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.515799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.515933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.516051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.516077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.516281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.516420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.516447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.516574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.516711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.516741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 
00:34:19.374 [2024-05-15 16:55:26.516882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.517101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.517128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.517297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.517421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.517450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.517581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.517735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.517764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.517895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.518194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.518521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.518857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.518997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.519024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 
00:34:19.374 [2024-05-15 16:55:26.519176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.519286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.519312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.519494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.519646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.519675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.519861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.519996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.520039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.520195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.520360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.520390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.520548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.520703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.520733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.520897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.521007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.521033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.521210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.521388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.521422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 
00:34:19.374 [2024-05-15 16:55:26.521576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.521768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.521794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.521932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.522174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.522204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.522397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.522582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.522611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.522778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.522902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.522929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.523103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.523281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.523310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.523470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.523625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.523652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.523798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.523954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.523984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 
00:34:19.374 [2024-05-15 16:55:26.524118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.524259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.524286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.524433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.524608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.524634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.524772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.524941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.524970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.525165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.525303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.525348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.525485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.525635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.525664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.525790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.525914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.525944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.374 qpair failed and we were unable to recover it. 00:34:19.374 [2024-05-15 16:55:26.526119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.374 [2024-05-15 16:55:26.526266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.526293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 
00:34:19.375 [2024-05-15 16:55:26.526436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.526602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.526628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.526764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.526926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.526955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.527095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.527253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.527280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.527415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.527552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.527578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.527719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.527883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.527909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.528044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.528227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.528271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.528418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.528604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.528634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 
00:34:19.375 [2024-05-15 16:55:26.528823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.528983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.529010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.529165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.529305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.529332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.529524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.529707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.529737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.529918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.530048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.530078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.530262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.530406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.530432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.530603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.530752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.530779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.530893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.531062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.531092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 
00:34:19.375 [2024-05-15 16:55:26.531277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.531387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.531413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.531585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.531769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.531798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.531979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.532120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.532146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.532264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.532404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.532430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.532591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.532742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.532772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.532940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.533080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.533106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.533272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.533463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.533492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 
00:34:19.375 [2024-05-15 16:55:26.533659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.533842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.533868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.534008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.534152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.534178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.534309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.534422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.534450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.534643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.534800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.534829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.534968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.535081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.535108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.535266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.535455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.535495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.535649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.535829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.535858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 
00:34:19.375 [2024-05-15 16:55:26.536012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.536156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.536183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.536345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.536478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.536505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.536649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.536793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.536819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.536983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.537134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.537163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.537325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.537445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.537473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.537612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.537769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.375 [2024-05-15 16:55:26.537799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.375 qpair failed and we were unable to recover it. 00:34:19.375 [2024-05-15 16:55:26.537955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.538146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.538173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 
00:34:19.376 [2024-05-15 16:55:26.538370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.538506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.538535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.538688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.538837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.538866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.539073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.539201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.539239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.539379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.539525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.539552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.539689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.539829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.539860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.540042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.540172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.540202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.540410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.540570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.540600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 
00:34:19.376 [2024-05-15 16:55:26.540756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.540913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.540942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.541124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.541300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.541327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.541463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.541620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.541649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.541816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.541999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.542025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.542152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.542340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.542371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.542525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.542667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.542693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.542852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.543014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.543043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 
00:34:19.376 [2024-05-15 16:55:26.543231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.543385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.543414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.543606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.543747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.543773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.543909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.544066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.544093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.544266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.544383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.544411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.544568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.544697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.544723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.544865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.545026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.545053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.545213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.545354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.545381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 
00:34:19.376 [2024-05-15 16:55:26.545509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.545662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.545689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.545853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.546153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.546524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.546825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.546983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.547098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.547239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.547267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.547405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.547550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.547577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 
00:34:19.376 [2024-05-15 16:55:26.547735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.547849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.547876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.548038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.548153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.548180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.548328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.548478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.548521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.548691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.548862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.548893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.549016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.549156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.549184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.549343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.549462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.549488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 00:34:19.376 [2024-05-15 16:55:26.549655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.549791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.376 [2024-05-15 16:55:26.549817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.376 qpair failed and we were unable to recover it. 
00:34:19.377 [2024-05-15 16:55:26.549959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.550265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.550569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.550840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.550984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.551126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.551274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.551300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.551447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.551562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.551588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.551699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.551843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.551869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 
00:34:19.377 [2024-05-15 16:55:26.552010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.552146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.552173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.552324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.552439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.552465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.552616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.552732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.552759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.552888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.553185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.553479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.553791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.553954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 
00:34:19.377 [2024-05-15 16:55:26.554096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.554242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.554269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.554390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.554507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.554533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.554655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.554799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.554830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.554973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.555090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.555116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.555240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.555409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.555435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.555604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.555720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.555749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 00:34:19.377 [2024-05-15 16:55:26.555886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.556041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.377 [2024-05-15 16:55:26.556067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.377 qpair failed and we were unable to recover it. 
00:34:19.655 [2024-05-15 16:55:26.556213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.556366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.556392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.556510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.556646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.556672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.556841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.556988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.557133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.557402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.557705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.557876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.557996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.558142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.558169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 
00:34:19.655 [2024-05-15 16:55:26.558292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.558434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.558461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.558582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.558726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.558752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.558897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.559210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.559535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.559852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.559992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.560019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.655 qpair failed and we were unable to recover it. 00:34:19.655 [2024-05-15 16:55:26.560193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.655 [2024-05-15 16:55:26.560327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.560353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-05-15 16:55:26.560497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.560662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.560687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.560808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.560953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.560980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.561128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.561283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.561398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.561543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.561569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.561687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.561829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.561855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.561998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.562257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-05-15 16:55:26.562536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.562821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.562991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.563102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.563250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.563278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.563417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.563540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.563566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.563709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.563850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.563876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.563997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.564144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.564170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.564318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.564483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.564509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-05-15 16:55:26.564629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.564774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.564800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.564911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.565225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.565514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.565856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.565999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.566135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.566464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-05-15 16:55:26.566769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.566963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.567103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.567264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.567290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.567399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.567545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.567571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.567709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.567875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.567901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.568017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.568160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.568196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.568327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.568465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.568491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.568659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.568827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.568853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 
00:34:19.656 [2024-05-15 16:55:26.569011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.569150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.569176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.656 qpair failed and we were unable to recover it. 00:34:19.656 [2024-05-15 16:55:26.569327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.656 [2024-05-15 16:55:26.569488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.569514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.569633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.569754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.569780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.569899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.570162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.570470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.570755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.570943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 
00:34:19.657 [2024-05-15 16:55:26.571058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.571196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.571234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.571358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.571500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.571526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.571667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.571782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.571808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.571970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.572136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.572162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.572287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.572460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.572486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.572631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.572774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.572800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.572915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.573062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.573087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 
00:34:19.657 [2024-05-15 16:55:26.573205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.573335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.573367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.573508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.573677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.573703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.573841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.574178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.574486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.574805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.574993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 00:34:19.657 [2024-05-15 16:55:26.575132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.575268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.657 [2024-05-15 16:55:26.575310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.657 qpair failed and we were unable to recover it. 
00:34:19.657 [2024-05-15 16:55:26.575477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.657 [2024-05-15 16:55:26.575596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.657 [2024-05-15 16:55:26.575622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.657 qpair failed and we were unable to recover it.
00:34:19.662 [... the same four-line failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x2047570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 16:55:26.575477 through 16:55:26.621912, identical except for the microsecond timestamps; the roughly 150 further repetitions are elided here ...]
00:34:19.663 [2024-05-15 16:55:26.622076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.622177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.622202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.622371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.622495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.622520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.622670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.622784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.622809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.622922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.623252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.623557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.623807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.623936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 
00:34:19.663 [2024-05-15 16:55:26.624101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.624243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.624269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.624381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.624521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.624546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.663 qpair failed and we were unable to recover it. 00:34:19.663 [2024-05-15 16:55:26.624687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.663 [2024-05-15 16:55:26.624826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.624851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.625022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.625184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.625209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.625360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.625504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.625529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.625639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.625776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.625801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.625943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.626080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.626105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 
00:34:19.664 [2024-05-15 16:55:26.626254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.626417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.626442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.626558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.626733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.626758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.626918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.627247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.627540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.627827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.627998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.628186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 
00:34:19.664 [2024-05-15 16:55:26.628449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.628733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.628903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.629050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.629190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.629222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.629391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.629530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.629555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.629717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.629883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.629908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.630030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.630165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.630190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.630354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.630520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.630545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 
00:34:19.664 [2024-05-15 16:55:26.630687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.630829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.630854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.630997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.631129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.631154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.631304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.631419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.631445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.631564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.631673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.631698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.631859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.632166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.632457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 
00:34:19.664 [2024-05-15 16:55:26.632727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.632885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.633023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.633169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.633194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.633342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.633461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.633486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.633653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.633793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.633819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.664 qpair failed and we were unable to recover it. 00:34:19.664 [2024-05-15 16:55:26.633963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.664 [2024-05-15 16:55:26.634082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.634107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.634270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.634412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.634438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.634579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.634714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.634739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 
00:34:19.665 [2024-05-15 16:55:26.634883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.635223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.635531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.635832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.635997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.636133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.636263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.636291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.636406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.636558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.636584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.636730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.636841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.636867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 
00:34:19.665 [2024-05-15 16:55:26.637004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.637150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.637177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.637290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.637408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.637433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.637569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.637684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.637709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.637879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.638213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.638477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.638778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.638948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 
00:34:19.665 [2024-05-15 16:55:26.639072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.639206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.639237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.639378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.639489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.639514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.639631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.639774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.639799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.639962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.640095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.640121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.640262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.640407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.640432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.640571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.640739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.640764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.640876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.641050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.641075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 
00:34:19.665 [2024-05-15 16:55:26.641240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.641352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.641377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.665 qpair failed and we were unable to recover it. 00:34:19.665 [2024-05-15 16:55:26.641549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.665 [2024-05-15 16:55:26.641694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.641720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.641834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.641953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.641978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.642121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.642233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.642259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.642397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.642531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.642555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.642719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.642857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.642884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.643028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.643189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.643219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-05-15 16:55:26.643360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.643502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.643527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.643691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.643826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.643851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.643967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.644244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.644500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.644780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.644948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.645085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.645253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.645279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-05-15 16:55:26.645445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.645564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.645590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.645738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.645863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.645889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.646006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.646148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.646173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.646349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.646455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.646481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.646598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.646739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.646765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.646904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.647176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-05-15 16:55:26.647473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.647827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.647988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.648180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.648478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.648779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.648966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.649106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.649271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.649297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.649412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.649555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.649580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 
00:34:19.666 [2024-05-15 16:55:26.649717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.649829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.649854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.650010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.650156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.650181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.650327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.650489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.650514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.666 qpair failed and we were unable to recover it. 00:34:19.666 [2024-05-15 16:55:26.650653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.666 [2024-05-15 16:55:26.650768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.650793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-05-15 16:55:26.650928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.651068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.651095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-05-15 16:55:26.651209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.651381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.651407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 00:34:19.667 [2024-05-15 16:55:26.651576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.651691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.667 [2024-05-15 16:55:26.651716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.667 qpair failed and we were unable to recover it. 
00:34:19.667 [2024-05-15 16:55:26.651850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.667 [2024-05-15 16:55:26.652013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.667 [2024-05-15 16:55:26.652039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.667 qpair failed and we were unable to recover it.
00:34:19.667 [... the same three-line pattern repeats continuously from 16:55:26.652 through 16:55:26.698: every connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for the qpair (tqpair=0x2047570, briefly tqpair=0x7f6f4c000b90 around 16:55:26.674-26.676), and each qpair fails without recovery ...]
00:34:19.672 [2024-05-15 16:55:26.698187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.698363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.698390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.698534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.698651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.698678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.698821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.698932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.698957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.699098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.699262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.699288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.699432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.699570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.699596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.699719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.699869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.699894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.700010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.700150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.700175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 
00:34:19.672 [2024-05-15 16:55:26.700314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.700487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.700513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.700626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.700735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.700761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.700925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.701065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.701090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.701258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.701370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.701395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.701539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.701702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.701728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.701869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.702006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.702032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.672 [2024-05-15 16:55:26.702148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.702310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.702336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 
00:34:19.672 [2024-05-15 16:55:26.702473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.702615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.672 [2024-05-15 16:55:26.702640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.672 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.702805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.702945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.702975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.703113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.703228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.703254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.703365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.703514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.703539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.703656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.703768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.703795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.703934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.704065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.704090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.704254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.704399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.704424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-05-15 16:55:26.704558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.704702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.704727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.704865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.705168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.705472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.705779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.705939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.706065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.706204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.706242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.706385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.706505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.706532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-05-15 16:55:26.706650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.706789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.706814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.706960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.707123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.707148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.707293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.707403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.707429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.707572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.707716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.707741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.707914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.708196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.708512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-05-15 16:55:26.708848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.708984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.709159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.709462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.709783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.709972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.710091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.710239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.710266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.710407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.710545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.710571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.710708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.710837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.710862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 
00:34:19.673 [2024-05-15 16:55:26.711001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.711167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.711193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.673 qpair failed and we were unable to recover it. 00:34:19.673 [2024-05-15 16:55:26.711338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.711453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.673 [2024-05-15 16:55:26.711479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.711645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.711807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.711832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.711978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.712267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.712539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.712841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.712995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-05-15 16:55:26.713110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.713226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.713252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.713396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.713535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.713561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.713695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.713812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.713838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.714003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.714302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.714592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.714883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.714995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-05-15 16:55:26.715136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.715481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.715760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.715907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.716027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.716133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.716159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.716302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.716465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.716490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.716630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.716801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.716827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.716972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.717134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.717159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 
00:34:19.674 [2024-05-15 16:55:26.717305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.717423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.717448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.717590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.717723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.717749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.717890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.718230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.718522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.674 [2024-05-15 16:55:26.718788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.674 [2024-05-15 16:55:26.718991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.674 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.719104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.719240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.719266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-05-15 16:55:26.719440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.719580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.719605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.719745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.719876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.719901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.720068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.720207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.720238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.720415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.720531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.720556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.720695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.720805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.720830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.720996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.721136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.721162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.721305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.721443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.721472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-05-15 16:55:26.721613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.721778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.721804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.721944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.722203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.722500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.722785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.722951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.723093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.723232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.723258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.723428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.723564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.723589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-05-15 16:55:26.723728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.723862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.723887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.724036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.724175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.724201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.724371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.724512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.724538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.724655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.724793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.724819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.724963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.725293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.725550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.675 [2024-05-15 16:55:26.725848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.725989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.726145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.726441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.726737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.726909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.727017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.727152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.727177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.727327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.727491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.727517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 00:34:19.675 [2024-05-15 16:55:26.727687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.727798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.675 [2024-05-15 16:55:26.727823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.675 qpair failed and we were unable to recover it. 
00:34:19.676 [2024-05-15 16:55:26.727964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.676 [2024-05-15 16:55:26.728100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.676 [2024-05-15 16:55:26.728125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.676 qpair failed and we were unable to recover it.
[... the same failure cycle repeats without variation through 16:55:26.773771: two posix_sock_create connect() failures (errno = 111) per attempt, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x2047570 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it."; no reconnect attempt succeeds ...]
00:34:19.681 [2024-05-15 16:55:26.773913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.774209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.774547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.774857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.774991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.775182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.775521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.775775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.775944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 
00:34:19.681 [2024-05-15 16:55:26.776074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.776237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.776263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.776387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.776562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.776587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.776723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.776865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.776890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.777034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.777142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.777167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.777329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.777443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.777469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.777636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.777799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.777824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.777989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.778125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.778150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 
00:34:19.681 [2024-05-15 16:55:26.778290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.778458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.778484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.778589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.778753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.778778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.778897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.779177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.779448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.779752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.779915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 00:34:19.681 [2024-05-15 16:55:26.780027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.780179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.780204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.681 qpair failed and we were unable to recover it. 
00:34:19.681 [2024-05-15 16:55:26.780354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.681 [2024-05-15 16:55:26.780473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.780498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.780666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.780806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.780831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.780970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.781107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.781132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.781296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.781461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.781486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.781630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.781775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.781801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.781941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.782225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-05-15 16:55:26.782483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.782766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.782957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.783065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.783187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.783212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.783398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.783536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.783562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.783723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.783867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.783893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.784059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.784201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.784232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.784395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.784557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.784582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-05-15 16:55:26.784717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.784882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.784908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.785048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.785185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.785211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.785366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.785503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.785528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.785646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.785782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.785811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.785989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.786103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.786128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.786248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.786389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.786414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.786560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.786679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.786705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-05-15 16:55:26.786872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.787228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.787532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.787838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.787978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.788118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.788418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 00:34:19.682 [2024-05-15 16:55:26.788762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.788925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.682 qpair failed and we were unable to recover it. 
00:34:19.682 [2024-05-15 16:55:26.789068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.789180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.682 [2024-05-15 16:55:26.789207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.789356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.789520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.789545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.789658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.789778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.789804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.789942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.790110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.790135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.790277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.790443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.790468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.790593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.790721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.790746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.790859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-05-15 16:55:26.791169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.791431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.791677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.791840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.791986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.792267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.792569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.792871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.792984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-05-15 16:55:26.793121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.793418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.793747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.793898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.794040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.794163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.794188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.794341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.794474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.794500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.794674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.794811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.794836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.794948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-05-15 16:55:26.795267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.795578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.795855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.795989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.796151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.796423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.796686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.796851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.797019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.797128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.797154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 
00:34:19.683 [2024-05-15 16:55:26.797294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.797415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.797441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.797605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.797720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.797744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.797861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.798010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.798035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.798150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.798271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.798297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.683 qpair failed and we were unable to recover it. 00:34:19.683 [2024-05-15 16:55:26.798423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-05-15 16:55:26.798564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.798589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.798728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.798861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.798886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.799025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.799168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.799193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 
00:34:19.684 [2024-05-15 16:55:26.799387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.799514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.799543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.799661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.799780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.799806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.799969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.800260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.800554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.800845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.800983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.801132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 
00:34:19.684 [2024-05-15 16:55:26.801383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.801680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.801844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.801982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.802236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.802509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.802792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.802928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 00:34:19.684 [2024-05-15 16:55:26.803065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.803205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.803256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it. 
00:34:19.684 [2024-05-15 16:55:26.803377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.803519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.684 [2024-05-15 16:55:26.803544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.684 qpair failed and we were unable to recover it.
[The same error group -- two posix.c:1037:posix_sock_create "connect() failed, errno = 111" messages, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats without variation from 16:55:26.803377 through 16:55:26.849578 (log timestamps 00:34:19.684 through 00:34:19.689).]
00:34:19.689 [2024-05-15 16:55:26.849721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.849858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.849887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.850042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.850158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.850183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.850356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.850472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.850499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.850633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.850769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.850794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.850908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.851070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.851095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.851242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.851381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.851406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.851538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.851701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.851727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 
00:34:19.689 [2024-05-15 16:55:26.851860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.852168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.852453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.852736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.852900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.853041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.853182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.853208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.853357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.853472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.853497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.853638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.853776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.853801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 
00:34:19.689 [2024-05-15 16:55:26.853920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.854200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.854482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.854816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.854984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.855128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.855284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.855310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.855448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.855611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.855636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.855753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.855886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.855911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 
00:34:19.689 [2024-05-15 16:55:26.856057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.856194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.856226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.856369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.856516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.856541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.856681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.856849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.856874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.857016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.857151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.857176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.857321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.857462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.857487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.857653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.857770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.857795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.857930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 
00:34:19.689 [2024-05-15 16:55:26.858288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.858565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.858827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.858997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.859162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.859310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.859341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.689 qpair failed and we were unable to recover it. 00:34:19.689 [2024-05-15 16:55:26.859491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.689 [2024-05-15 16:55:26.859638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.859665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.859809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.859926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.859952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.860100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.860238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.860265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 
00:34:19.690 [2024-05-15 16:55:26.860423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.860552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.860579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.860701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.860822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.860848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.860999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.861136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.861162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.861284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.861430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.861456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.861591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.861724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.861749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.861918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.862032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.862057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 00:34:19.690 [2024-05-15 16:55:26.862231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.862352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.690 [2024-05-15 16:55:26.862377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.690 qpair failed and we were unable to recover it. 
00:34:19.690 [2024-05-15 16:55:26.862508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.862652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.862678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.862834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.862975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.863115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.863408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.863688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.863829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.863972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.864263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 
00:34:19.962 [2024-05-15 16:55:26.864519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.864822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.864955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.865128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.865277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.865304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.865446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.865582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.865608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.865829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.865947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.865973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.866090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.866235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.866261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.866411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.866572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.866598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 
00:34:19.962 [2024-05-15 16:55:26.866746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.866866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.866894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.962 qpair failed and we were unable to recover it. 00:34:19.962 [2024-05-15 16:55:26.867052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.962 [2024-05-15 16:55:26.867168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.867196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f3c000b90 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.867354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.867495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.867521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.867638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.867800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.867826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.867941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.868225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.868565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 
00:34:19.963 [2024-05-15 16:55:26.868866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.868975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.869140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.869422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.869713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.869880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.870017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.870147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.870173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.870295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.870412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.870438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.870575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.870726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.870752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 
00:34:19.963 [2024-05-15 16:55:26.870895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.871043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.871068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.871213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.871376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.871406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.871546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.871711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.871736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.871852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.872192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.872528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.872833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.872994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 
00:34:19.963 [2024-05-15 16:55:26.873158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.873443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.873744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.873880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.874009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.874146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.874171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.874313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.874446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.874472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.874618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.874752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.874778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.874921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 
00:34:19.963 [2024-05-15 16:55:26.875199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.875451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.875730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.963 [2024-05-15 16:55:26.875874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.963 qpair failed and we were unable to recover it. 00:34:19.963 [2024-05-15 16:55:26.875982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.876125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.876150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.876294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.876404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.876430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.876566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.876711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.876736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.876876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 
00:34:19.964 [2024-05-15 16:55:26.877160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.877460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.877766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.877958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.878104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.878248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.878274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.878416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.878531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.878556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.878672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.878829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.878855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.878972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.879111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.879136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 
00:34:19.964 [2024-05-15 16:55:26.879310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.879440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.879465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.879611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.879750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.879776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.879895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.880156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.880470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.880782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.880943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 00:34:19.964 [2024-05-15 16:55:26.881060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.881197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.964 [2024-05-15 16:55:26.881230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.964 qpair failed and we were unable to recover it. 
00:34:19.970 [2024-05-15 16:55:26.925465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.925628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.925654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.925761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.925906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.925931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.926049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.926213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.926243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.926374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.926511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.926537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.926705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.926848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.926873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.926989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.927299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 
00:34:19.970 [2024-05-15 16:55:26.927597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.927862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.927997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.928188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.928479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.928755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.928943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.929107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.929223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.929248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.929364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.929512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.929537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 
00:34:19.970 [2024-05-15 16:55:26.929655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.929773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.929798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.929934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.930242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.930573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.930855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.930997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.931111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.931269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.931295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.931432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.931569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.931595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 
00:34:19.970 [2024-05-15 16:55:26.931712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.931828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.931855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.931996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.932137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.932163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.932278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.932442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.932468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.932612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.932726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.932752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.932920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.933054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.933079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.933242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.933405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.970 [2024-05-15 16:55:26.933430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.970 qpair failed and we were unable to recover it. 00:34:19.970 [2024-05-15 16:55:26.933563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.933727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.933753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 
00:34:19.971 [2024-05-15 16:55:26.933897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.934188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.934498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.934805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.934982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.935124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.935258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.935284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.935397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.935543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.935572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.935693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.935862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.935888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 
00:34:19.971 [2024-05-15 16:55:26.936013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.936318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.936583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.936850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.936996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.937160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.937449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.937784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.937970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 
00:34:19.971 [2024-05-15 16:55:26.938120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.938257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.938283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.938419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.938557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.938587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.938732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.938877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.938902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.939040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.939180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.939205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.939381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.939521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.939547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.939665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.939782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.939809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.939938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.940046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.940073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 
00:34:19.971 [2024-05-15 16:55:26.940249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.940365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.940391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.940536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.940701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.940727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.940868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.941017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.941042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.971 qpair failed and we were unable to recover it. 00:34:19.971 [2024-05-15 16:55:26.941184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.971 [2024-05-15 16:55:26.941333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.941359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.941504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.941637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.941663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.941780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.941893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.941918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.942050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.942192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.942224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 
00:34:19.972 [2024-05-15 16:55:26.942332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.942497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.942523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.942640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.942805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.942831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.942968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.943118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.943143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.943283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.943425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.943451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.943616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.943772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.943797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.943923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.944242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 
00:34:19.972 [2024-05-15 16:55:26.944555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.944858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.944993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.945158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.945438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.945741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.945880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.946019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.946128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.946153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.946272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.946441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.946466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 
00:34:19.972 [2024-05-15 16:55:26.946622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.946756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.946781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.946926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.947202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.947518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.947807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.947951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.948088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.948257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.948284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.948420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.948533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.948559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 
00:34:19.972 [2024-05-15 16:55:26.948723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.948890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.948916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.949033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.949180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.949205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.949329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.949438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.949463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.949632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.949796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.949821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.949930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.950065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.950091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.972 qpair failed and we were unable to recover it. 00:34:19.972 [2024-05-15 16:55:26.950262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.972 [2024-05-15 16:55:26.950379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.950404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.950542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.950650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.950676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 
00:34:19.973 [2024-05-15 16:55:26.950812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.950951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.950976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.951141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.951280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.951306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.951445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.951552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.951578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.951723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.951857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.951883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.952002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.952253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.952551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 
00:34:19.973 [2024-05-15 16:55:26.952811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.952966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.953119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.953253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.953279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.953401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.953538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.953563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.953703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.953847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.953879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.954000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.954163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.954189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.954312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.954426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.954451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 00:34:19.973 [2024-05-15 16:55:26.954593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.954738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.973 [2024-05-15 16:55:26.954763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.973 qpair failed and we were unable to recover it. 
00:34:19.973 [2024-05-15 16:55:26.954905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.973 [2024-05-15 16:55:26.955742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.973 [2024-05-15 16:55:26.955772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.973 qpair failed and we were unable to recover it.
00:34:19.973 [... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt, timestamps 16:55:26.955920 through 16:55:27.001573, console time 00:34:19.973 through 00:34:19.978 ...]
00:34:19.978 [2024-05-15 16:55:27.001683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.001818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.001843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.978 qpair failed and we were unable to recover it. 00:34:19.978 [2024-05-15 16:55:27.001974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.978 qpair failed and we were unable to recover it. 00:34:19.978 [2024-05-15 16:55:27.002282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.978 qpair failed and we were unable to recover it. 00:34:19.978 [2024-05-15 16:55:27.002596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.978 qpair failed and we were unable to recover it. 00:34:19.978 [2024-05-15 16:55:27.002854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.002993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.978 qpair failed and we were unable to recover it. 00:34:19.978 [2024-05-15 16:55:27.003152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.003296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.978 [2024-05-15 16:55:27.003326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.003443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.003549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.003574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 
00:34:19.979 [2024-05-15 16:55:27.003714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.003835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.003860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.003999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.004112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.004138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.004292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.004452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.004476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.004600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.004733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.004758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.004897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.005226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.005523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 
00:34:19.979 [2024-05-15 16:55:27.005809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.005954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.006124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.006245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.006281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.006429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.006604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.006630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.006795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.006910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.006935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.007059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.007176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.007201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.007342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.007457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.007482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.007628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.007731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.007756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 
00:34:19.979 [2024-05-15 16:55:27.007905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.008021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.008047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.008185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.008335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.008360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.008472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.008604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.008629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.008796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.009534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.009564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.009727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.009878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.009902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.010072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.010182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.010207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.010366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.010483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.010507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 
00:34:19.979 [2024-05-15 16:55:27.010648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.010755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.010779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.010921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.011198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.011527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.011843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.011984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.012135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 00:34:19.979 [2024-05-15 16:55:27.012408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.979 qpair failed and we were unable to recover it. 
00:34:19.979 [2024-05-15 16:55:27.012718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.979 [2024-05-15 16:55:27.012868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.012977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.013260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.013510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.013792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.013932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.014047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.014186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.014210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.014361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.014481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.014506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 
00:34:19.980 [2024-05-15 16:55:27.014640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.014777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.014801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.014940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.015192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.015489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.015803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.015945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.016089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.016206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.016237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.016349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.016481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.016518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 
00:34:19.980 [2024-05-15 16:55:27.016661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.016822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.016847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.016960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.017228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.017495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.017752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.017915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.018058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.018176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.018201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.980 qpair failed and we were unable to recover it. 00:34:19.980 [2024-05-15 16:55:27.018372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.980 [2024-05-15 16:55:27.018490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.018515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 
00:34:19.981 [2024-05-15 16:55:27.018697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.018811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.018840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.019003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.019149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.019173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.019295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.019429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.019453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.019602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.019738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.019764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.019877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.020175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.020457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 
00:34:19.981 [2024-05-15 16:55:27.020782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.020946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.021081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.021225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.021251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.021397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.021516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.021540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.021679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.021796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.021821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.021956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.022092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.022117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.022268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.022386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.022411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.022530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.022698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.022723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 
00:34:19.981 [2024-05-15 16:55:27.022865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.023130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.023384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.023757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.023896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.024034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.024319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.024579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 
00:34:19.981 [2024-05-15 16:55:27.024849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.024984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.025120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.025268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.025293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.025415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.025568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.025593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.025701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.025843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.025867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.026011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.026157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.026181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.026327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.026460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.026484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.026659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.026771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.026796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 
00:34:19.981 [2024-05-15 16:55:27.026935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.027049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.027074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.981 qpair failed and we were unable to recover it. 00:34:19.981 [2024-05-15 16:55:27.027209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.981 [2024-05-15 16:55:27.027332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.027357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.027496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.027638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.027663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.027804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.027920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.027945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.028061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.028206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.028237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.028351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.028486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.028510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.028628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.028768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.028792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 
00:34:19.982 [2024-05-15 16:55:27.028938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.029223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.029479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.029758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.029892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.030056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.030193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.030236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.030360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.030480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.030504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.030650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.030794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.030819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 
00:34:19.982 [2024-05-15 16:55:27.030937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.031212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.031518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.031836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.031991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.032109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.032301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.032326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.032438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.032580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.032604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.032713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.032873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.032898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 
00:34:19.982 [2024-05-15 16:55:27.033036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.033170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.033195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.033342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.033491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.033516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.033629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.033770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.033798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.033963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.034114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.034138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.034268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.034403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.034428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.034579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.034713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.034738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 00:34:19.982 [2024-05-15 16:55:27.034877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.035019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.982 [2024-05-15 16:55:27.035043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.982 qpair failed and we were unable to recover it. 
00:34:19.982 [2024-05-15 16:55:27.035152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.035264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.035289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.982 qpair failed and we were unable to recover it.
00:34:19.982 [2024-05-15 16:55:27.035467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.035617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.035642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.982 qpair failed and we were unable to recover it.
00:34:19.982 [2024-05-15 16:55:27.035781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.035939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.035963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.982 qpair failed and we were unable to recover it.
00:34:19.982 [2024-05-15 16:55:27.036076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.036234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.982 [2024-05-15 16:55:27.036259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.036376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.036489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.036513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.036660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.036821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.036851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.036972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.037280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.037589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.037871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.037981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.038143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.038471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.038804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.038963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.039105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.039231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.039257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.039371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.039502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.039532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.039656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.039799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.039823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.039995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.040111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.040136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.040274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.040420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.040445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.040597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.040736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.040761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.040928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.041208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.041511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.041759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.041948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.042093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.042241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.042278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.042444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.042599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.042624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.042763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.042901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.042926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.043052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.043167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.043192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.043344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.043485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.043510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.043653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.043759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.043783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.043898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.044180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.044537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.044810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.044996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.983 qpair failed and we were unable to recover it.
00:34:19.983 [2024-05-15 16:55:27.045135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.045245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.983 [2024-05-15 16:55:27.045270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.045412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.045555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.045579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.045716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.045863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.045889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.046026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.046183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.046208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.046354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.046516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.046540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.046684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.046846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.046871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.046986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.047106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.047132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.047297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.047439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.047463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.047576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.047752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.047776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.047887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.048145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.048421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.048715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.048905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.049091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.049235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.049264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.049392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.049544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.049569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.049710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.049879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.049904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.050017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.050152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.050176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.050324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.050434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.050461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.050591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.050731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.050756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.050880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.051154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.051410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.051681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.051806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.051983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.052144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.052174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.052289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.052427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.052452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.052581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.052724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.052761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.052877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.053174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.053468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.984 [2024-05-15 16:55:27.053783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.984 [2024-05-15 16:55:27.053953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.984 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.054102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.054222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.054247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.054399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.054535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.054559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.054721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.054886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.054921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.055081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.055200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.055231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.055359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.055504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.055529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.055672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.055792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.055817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.055922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.056227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.056556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.056846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.056977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.057120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.057418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.057702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.057863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.058031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.058193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.058224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.058378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.058521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.058555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.058670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.058785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.058811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.058956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.059095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.059119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.059246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.059410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.059435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.059582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.059716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.059741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.059885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.060179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.060502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.985 qpair failed and we were unable to recover it.
00:34:19.985 [2024-05-15 16:55:27.060800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.985 [2024-05-15 16:55:27.060968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.061109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.061264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.061290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.061428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.061552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.061576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.061738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.061861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.061885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.062024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.062300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.062558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.062849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.062997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.063137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.063423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.063666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.063840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.063952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.064196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.064496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.064780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.064966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.065118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.065271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.065296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.065435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.065580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.065605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.065734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.065871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.065895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.066017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.066164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.066188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.066312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.066422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.066446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.066591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.066717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.066741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.066857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.067193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.067509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.067802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.067947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.068078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.068353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.068616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.068868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.068999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.069024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.069149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.069284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.986 [2024-05-15 16:55:27.069309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.986 qpair failed and we were unable to recover it.
00:34:19.986 [2024-05-15 16:55:27.069457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.069581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.069605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.069723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.069841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.069866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.070005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.070150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.070174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.070336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.070479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.070504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.070659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.070799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.070824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.070962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.071127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.071151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.071284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.071425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.071450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.071567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.071691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.071716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.071858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.072000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.072025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.072141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.072311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.072336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.072483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.072617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.987 [2024-05-15 16:55:27.072641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.987 qpair failed and we were unable to recover it.
00:34:19.987 [2024-05-15 16:55:27.072763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.072909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.072934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.073054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.073194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.073229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.073349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.073492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.073517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.073658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.073821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.073845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.073993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.074159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.074184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.074339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.074444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.074468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.074612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.074726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.074749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 
00:34:19.987 [2024-05-15 16:55:27.074892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.075182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.075475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.075766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.075956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.076119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.076226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.076251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.076404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.076513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.076539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.076698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.076805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.076831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 
00:34:19.987 [2024-05-15 16:55:27.076965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.077105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.077129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.077275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.077382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.077406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.077562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.077729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.077754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.077904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.078048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.078073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.078192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.078317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.987 [2024-05-15 16:55:27.078342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.987 qpair failed and we were unable to recover it. 00:34:19.987 [2024-05-15 16:55:27.078515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.078660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.078696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.078824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.078964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.078989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 
00:34:19.988 [2024-05-15 16:55:27.079128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.079268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.079294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.079406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.079547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.079572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.079716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.079855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.079880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.079992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.080279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.080530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.080834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.080971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 
00:34:19.988 [2024-05-15 16:55:27.081139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.081249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.081274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.081390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.081499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.081536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.081678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.081797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.081821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.081968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.082114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.082138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.082283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.082427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.082455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.082569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.082716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.082740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.082890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 
00:34:19.988 [2024-05-15 16:55:27.083242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.083530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.083773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.083943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.084057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.084226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.084251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.084394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.084529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.084554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.084719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.084864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.084888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.085032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.085196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.085229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 
00:34:19.988 [2024-05-15 16:55:27.085378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.085545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.085569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.085687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.085828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.085853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.085963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.086095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.086119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.988 qpair failed and we were unable to recover it. 00:34:19.988 [2024-05-15 16:55:27.086231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.988 [2024-05-15 16:55:27.086341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.086365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.086509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.086614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.086639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.086781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.086918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.086942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.087112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.087270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.087295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 
00:34:19.989 [2024-05-15 16:55:27.087409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.087518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.087542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.087687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.087825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.087850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.087989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.088140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.088164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.088322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.088460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.088485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.088606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.088743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.088767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.088881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.089180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 
00:34:19.989 [2024-05-15 16:55:27.089465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.089725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.089882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.090027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.090327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.090604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.090874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.090988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.091174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 
00:34:19.989 [2024-05-15 16:55:27.091459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.091775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.091930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.092038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.092148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.092172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.092348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.092498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.092522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.092665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.092810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.092835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.092950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.093081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.093106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.093210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.093378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.093402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 
00:34:19.989 [2024-05-15 16:55:27.093526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.093662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.989 [2024-05-15 16:55:27.093686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.989 qpair failed and we were unable to recover it. 00:34:19.989 [2024-05-15 16:55:27.093818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.093931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.093955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.094061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.094196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.094226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.094376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.094486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.094510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.094643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.094758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.094782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.094891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.095043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.095067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.095209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.095408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.095433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 
00:34:19.990 [2024-05-15 16:55:27.095581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.095727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.095751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.095891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.096229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.096524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.096827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.096961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.097122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.097296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.097321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.097460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.097571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.097601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 
00:34:19.990 [2024-05-15 16:55:27.097767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.097873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.097898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.098011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.098152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.098177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.098340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.098476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.098501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.098645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.098763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.098789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.098953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.099208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.099543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 
00:34:19.990 [2024-05-15 16:55:27.099828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.099983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.100128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.100264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.100289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.990 [2024-05-15 16:55:27.100431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.100574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.990 [2024-05-15 16:55:27.100598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.990 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.100722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.100884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.100908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.101030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.101140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.101165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.101278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.101447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.101472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.101612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.101751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.101775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 
00:34:19.991 [2024-05-15 16:55:27.101909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.102213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.102473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.102807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.102941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.103050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.103154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.103178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.103316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.103457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.103481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.103635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.103772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.103796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 
00:34:19.991 [2024-05-15 16:55:27.103960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.104227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.104522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.104824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.104984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.105170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.105453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 00:34:19.991 [2024-05-15 16:55:27.105756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.991 [2024-05-15 16:55:27.105910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.991 qpair failed and we were unable to recover it. 
00:34:19.991 [2024-05-15 16:55:27.106073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.991 [2024-05-15 16:55:27.106212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.991 [2024-05-15 16:55:27.106242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.991 qpair failed and we were unable to recover it.
[... the same four-line failure sequence (two posix_sock_create connect() errors with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x2047570 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats back-to-back with successive timestamps from 16:55:27.106073 through 16:55:27.152443 ...]
00:34:19.997 [2024-05-15 16:55:27.152284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.997 [2024-05-15 16:55:27.152419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.997 [2024-05-15 16:55:27.152443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:19.997 qpair failed and we were unable to recover it.
00:34:19.997 [2024-05-15 16:55:27.152585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.152723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.152747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.152894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.153231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.153539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.153823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.153987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.154126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.154264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.154289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.154421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.154557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.154581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 
00:34:19.997 [2024-05-15 16:55:27.154721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.154885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.154910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.155023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.155160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.155185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.155305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.155452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.155476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.155620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.155753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.155777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.155940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.156222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.156521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 
00:34:19.997 [2024-05-15 16:55:27.156762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.156922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.157035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.157148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.157172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.997 qpair failed and we were unable to recover it. 00:34:19.997 [2024-05-15 16:55:27.157291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.997 [2024-05-15 16:55:27.157399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.157423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.157589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.157694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.157720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.157887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.158193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.158500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 
00:34:19.998 [2024-05-15 16:55:27.158786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.158953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.159118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.159258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.159283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.159397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.159531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.159554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.159694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.159830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.159854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.159997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.160131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.160155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.160302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.160440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.160465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.160603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.160740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.160764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 
00:34:19.998 [2024-05-15 16:55:27.160912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.161174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.161502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.161818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.161985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.162122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.162267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.162292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.162434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.162571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.162595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.162733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.162886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.162909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 
00:34:19.998 [2024-05-15 16:55:27.163024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.163189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.163213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.163358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.163499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.163523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.163663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.163771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.163794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.163931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.164228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.164512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 00:34:19.998 [2024-05-15 16:55:27.164813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.164980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.998 [2024-05-15 16:55:27.165008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.998 qpair failed and we were unable to recover it. 
00:34:19.998 [2024-05-15 16:55:27.165142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.165285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.165310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.165456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.165603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.165627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.165735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.165851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.165875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.166010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.166286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.166577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.166844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.166987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 
00:34:19.999 [2024-05-15 16:55:27.167122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.167260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.167285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.167457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.167593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.167617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.167757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.167899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.167923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.168067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.168206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.168238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.168404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.168515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.168540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.168664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.168782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.168806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.168979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 
00:34:19.999 [2024-05-15 16:55:27.169285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.169540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.169790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.169956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.170067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.170204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.170234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.170372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.170508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.170532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.170670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.170811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.170835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.170950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 
00:34:19.999 [2024-05-15 16:55:27.171263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.171596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.171860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.171996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.172113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.172228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.172253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.172392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.172501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.172525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.172636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.172772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.172796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:19.999 [2024-05-15 16:55:27.172934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.173042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.173066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 
00:34:19.999 [2024-05-15 16:55:27.173195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.173345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.999 [2024-05-15 16:55:27.173373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:19.999 qpair failed and we were unable to recover it. 00:34:20.000 [2024-05-15 16:55:27.173487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.173656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.173683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.000 qpair failed and we were unable to recover it. 00:34:20.000 [2024-05-15 16:55:27.173826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.173944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.173968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.000 qpair failed and we were unable to recover it. 00:34:20.000 [2024-05-15 16:55:27.174125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.174288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.174313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.000 qpair failed and we were unable to recover it. 00:34:20.000 [2024-05-15 16:55:27.174452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.174617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.174641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.000 qpair failed and we were unable to recover it. 00:34:20.000 [2024-05-15 16:55:27.174760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.174899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.174923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.000 qpair failed and we were unable to recover it. 00:34:20.000 [2024-05-15 16:55:27.175063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.175178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.000 [2024-05-15 16:55:27.175202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.000 qpair failed and we were unable to recover it. 
00:34:20.000 [2024-05-15 16:55:27.175335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.175451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.175475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.175584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.175700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.175726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.175869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.175976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.176110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.176394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.176704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.176877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.177012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.177158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.177183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-05-15 16:55:27.177330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.177442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.177467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.177584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.177723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.177749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.177914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.178262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.178563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.178840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.178983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.179148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.179291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.179316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-05-15 16:55:27.179426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.179569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.179594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.179739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.179856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.179884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.180037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.180175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.180201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.180327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.180462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.180486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.180625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.180764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.180788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.180953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.181234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-05-15 16:55:27.181536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.181836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.181972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.182106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.182246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.182272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.182386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.182497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.182521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.182636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.182790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-05-15 16:55:27.182815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-05-15 16:55:27.182962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.183103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.183127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.183271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.183406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.183431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 
00:34:20.278 [2024-05-15 16:55:27.183607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.183743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.183767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.183896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.184200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.184484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.184809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.184972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.185082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.185226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.185251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.185390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.185503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.185528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 
00:34:20.278 [2024-05-15 16:55:27.185665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.185823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.185847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.186015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.186126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.186150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.186290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.186424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.186448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.186611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.186746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.186770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.186926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.187062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.187086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.187232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.187399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.187423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.187568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.187704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.187728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 
00:34:20.278 [2024-05-15 16:55:27.187868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.188145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.188409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.188653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.188816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.188985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.189290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.189600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 
00:34:20.278 [2024-05-15 16:55:27.189868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.189977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.190138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.190434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.190743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.190933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.191072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.191208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.191237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.191372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.191520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.191544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 00:34:20.278 [2024-05-15 16:55:27.191651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.191784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.278 [2024-05-15 16:55:27.191808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.278 qpair failed and we were unable to recover it. 
00:34:20.279 [2024-05-15 16:55:27.191945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.192241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.192524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.192857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.192997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.193132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.193252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.193277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.193389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.193527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.193552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.193668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.193786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.193812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 
00:34:20.279 [2024-05-15 16:55:27.193948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.194227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.194583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.194844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.194985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.195129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.195245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.195269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.195407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.195556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.195580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.195746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.195876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.195900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 
00:34:20.279 [2024-05-15 16:55:27.196038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.196315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.196608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.196882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.196997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.197136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.197448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.197768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.197927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 
00:34:20.279 [2024-05-15 16:55:27.198097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.198263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.198289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.198430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.198564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.198588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.198705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.198832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.198856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.198997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.199271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.199562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.199863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.199992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 
00:34:20.279 [2024-05-15 16:55:27.200102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.200245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.200270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.279 qpair failed and we were unable to recover it. 00:34:20.279 [2024-05-15 16:55:27.200412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.279 [2024-05-15 16:55:27.200549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.200573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.200679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.200821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.200845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.200967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.201269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.201558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.201834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.201997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 
00:34:20.280 [2024-05-15 16:55:27.202160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.202278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.202303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.202444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.202604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.202628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.202772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.202909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.202932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.203046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.203159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.203182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.203334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.203465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.203489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.203631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.203796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.203820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.203961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 
00:34:20.280 [2024-05-15 16:55:27.204289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.204578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.204838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.204986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.205156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.205418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.205703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.205889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.206006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 
00:34:20.280 [2024-05-15 16:55:27.206251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.206525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.206796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.206933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.207071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.207210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.207238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.207359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.207504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.207528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.207641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.207754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.207779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.207918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 
00:34:20.280 [2024-05-15 16:55:27.208186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.208456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.280 [2024-05-15 16:55:27.208738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.280 [2024-05-15 16:55:27.208878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.280 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.209016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.209280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.209568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.209850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.209985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 
00:34:20.281 [2024-05-15 16:55:27.210138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.210286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.210311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.210452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.210574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.210597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.210731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.210844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.210868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.211038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.211289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.211597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.211879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.211988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 
00:34:20.281 [2024-05-15 16:55:27.212175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.212492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.212798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.212933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.213047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.213201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.213232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.213384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.213518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.213543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.213655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.213790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.213814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.213979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 
00:34:20.281 [2024-05-15 16:55:27.214240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.214572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.214869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.214982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.215006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.215166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.215286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.281 [2024-05-15 16:55:27.215310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.281 qpair failed and we were unable to recover it. 00:34:20.281 [2024-05-15 16:55:27.215427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.215578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.215602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.215746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.215887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.215917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.216061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.216199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.216229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 
00:34:20.282 [2024-05-15 16:55:27.216366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.216509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.216533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.216700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.216837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.216860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.217001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.217276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.217545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.217823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.217977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.218102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.218212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.218242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 
00:34:20.282 [2024-05-15 16:55:27.218372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.218486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.218510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.218656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.218772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.218796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.218936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.219208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.219479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.219733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.219870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 00:34:20.282 [2024-05-15 16:55:27.219985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.220102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.282 [2024-05-15 16:55:27.220126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.282 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-05-15 16:55:27.261511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.261652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.261677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.261825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.261939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.261964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.262086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.262225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.262250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.262368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.262503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.262527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.262666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.262786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.262810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.262948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.263086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.263111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.263268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.263420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.263444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-05-15 16:55:27.263590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.263731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.263755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.263896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.264061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.264085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.264224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.264364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.264388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.264526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.264701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.264724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.264864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.265169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.265481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-05-15 16:55:27.265789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.265922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.266031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.266161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.266185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.266307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.266443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.266467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-05-15 16:55:27.266610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-05-15 16:55:27.266746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.266770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.266888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.266997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.267168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.267446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-05-15 16:55:27.267745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.267911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.268074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.268186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.268210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.268362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.268499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.268522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.268662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.268828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.268851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.268965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.269249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.269570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-05-15 16:55:27.269852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.269992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.270151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.270439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.270771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.270911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.271022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.271155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.271178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.271324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.271459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.271484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.271626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.271772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.271797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-05-15 16:55:27.271945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.272221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.272503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.272783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.272921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.273039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.273174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.273197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.273372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.273483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.273508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.273641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.273761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.273785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-05-15 16:55:27.273931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.274096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.274120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.274259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.274375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.274398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.274537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.274701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.274725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.274869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.275029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.275053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.275194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.275308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.275334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-05-15 16:55:27.275500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-05-15 16:55:27.275635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.275664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.275811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.275972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.275996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-05-15 16:55:27.276136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.276303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.276329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.276468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.276580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.276605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.276738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.276875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.276899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.277014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.277146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.277170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.277328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.277448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.277472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.277591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.277707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.277730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.277874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-05-15 16:55:27.278149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.278427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.278694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.278878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.279013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.279140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.279164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.279308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.279422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.279446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.279567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.279704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.279728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.279847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-05-15 16:55:27.280204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.280461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.280788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.280917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.281078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.281244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.281270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.281409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.281545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.281569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.281712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.281847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.281870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.282038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-05-15 16:55:27.282314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.282584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.282852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.282991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.283162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.283437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.283714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.283859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-05-15 16:55:27.283997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-05-15 16:55:27.284137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.284162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 
00:34:20.290 [2024-05-15 16:55:27.284277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.284420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.284443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.284607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.284750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.284774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.284889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.284996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.285130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.285458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.285811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.285976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.286120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.286282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.286305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 
00:34:20.290 [2024-05-15 16:55:27.286424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.286542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.286566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.286704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.286844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.286868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.287007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.287146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.287170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.287354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.287492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.287516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.287654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.287796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.287820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.287928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.288227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 
00:34:20.290 [2024-05-15 16:55:27.288558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.288860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.288995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.289135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.289450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.289732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.289888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.290025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.290141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.290165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.290294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.290435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.290459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 
00:34:20.290 [2024-05-15 16:55:27.290623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.290732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.290759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.290903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.291035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.291059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-05-15 16:55:27.291205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-05-15 16:55:27.291357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.291396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.291525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.291662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.291688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.291798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.291936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.291962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.292102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.292236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.292260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.292408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.292546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.292571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 
00:34:20.291 [2024-05-15 16:55:27.292709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.292880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.292904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.293013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.293178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.293202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.293363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.293499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.293522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.293686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.293819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.293842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.293961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.294284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.294559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 
00:34:20.291 [2024-05-15 16:55:27.294851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.294989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.295123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.295233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.295258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.295392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.295525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.295549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.295710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.295846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.295870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.296006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.296258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.296516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 
00:34:20.291 [2024-05-15 16:55:27.296828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.296989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.297156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.297429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.297715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.297899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.298039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.298153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.298176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.298312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.298455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.298478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.298621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.298734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.298758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 
00:34:20.291 [2024-05-15 16:55:27.298867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.299040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.299064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.291 [2024-05-15 16:55:27.299229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.299373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.291 [2024-05-15 16:55:27.299397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.291 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.299532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.299667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.299691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.299805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.299943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.299967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.300109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.300225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.300249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.300384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.300497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.300520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.300652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.300762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.300786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 
00:34:20.292 [2024-05-15 16:55:27.300930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.301243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.301552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.301833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.301964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.302086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.302234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.302267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.302392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.302533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.302557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.302704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.302845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.302869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 
00:34:20.292 [2024-05-15 16:55:27.302993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.303301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.303561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.303835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.303997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.304146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.304271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.304295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.304437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.304577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.304600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.304758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.304872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.304897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 
00:34:20.292 [2024-05-15 16:55:27.305064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.305177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.305201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.305369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.305517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.305542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.305704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.305843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.305870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.306011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.306128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.306151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.306319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.306466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.306492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.306632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.306750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.306774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.306892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.307006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.307030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 
00:34:20.292 [2024-05-15 16:55:27.307155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.307272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.307298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.292 qpair failed and we were unable to recover it. 00:34:20.292 [2024-05-15 16:55:27.307423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.292 [2024-05-15 16:55:27.307567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.307590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.307723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.307858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.307883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.308000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.308131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.308156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.308308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.308446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.308470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.308609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.308744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.308767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.308908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 
00:34:20.293 [2024-05-15 16:55:27.309206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.309496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.309779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.309962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.310106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.310271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.310295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.310426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.310567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.310591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.310732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.310841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.310865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.310975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.311134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.311158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 
00:34:20.293 [2024-05-15 16:55:27.311310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.311452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.311477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.311584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.311743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.311767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.311909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.312240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.312514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.312788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.312953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.313092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.313240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.313265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 
00:34:20.293 [2024-05-15 16:55:27.313404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.313541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.313565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.313738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.313849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.313873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.314014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.314151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.314174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.314308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.314443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.314466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.314606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.314768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.314791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.314959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.315100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.315124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.315264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.315410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.315435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 
00:34:20.293 [2024-05-15 16:55:27.315553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.315694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.315717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.293 qpair failed and we were unable to recover it. 00:34:20.293 [2024-05-15 16:55:27.315883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.293 [2024-05-15 16:55:27.316024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.316183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.316468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.316767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.316911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.317049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.317185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.317209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.317353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.317464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.317489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 
00:34:20.294 [2024-05-15 16:55:27.317630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.317747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.317770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.317914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.318224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.318477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.318821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.318987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.319103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.319244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.319268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.319402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.319513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.319537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 
00:34:20.294 [2024-05-15 16:55:27.319670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.319811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.319836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.319949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.320251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.320523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.320784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.320922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.321039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.321290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 
00:34:20.294 [2024-05-15 16:55:27.321596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.321893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.321996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.322133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.322406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.322649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.322810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.322945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.323229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 
00:34:20.294 [2024-05-15 16:55:27.323487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-05-15 16:55:27.323765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-05-15 16:55:27.323921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.324084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.324203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.324231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.324375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.324511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.324534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.324671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.324783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.324806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.324919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.325218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-05-15 16:55:27.325477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.325779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.325938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.326074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.326186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.326211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.326336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.326470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.326494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.326657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.326785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.326810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.326949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.327235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-05-15 16:55:27.327564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.327865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.327977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.328173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.328481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.328760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.328948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.329091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.329233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.329259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.329400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.329568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.329593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-05-15 16:55:27.329740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.329876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.329901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.330039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.330183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.330207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.330366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.330502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.330526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-05-15 16:55:27.330665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.330831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-05-15 16:55:27.330855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-05-15 16:55:27.330968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-05-15 16:55:27.331101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-05-15 16:55:27.331125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-05-15 16:55:27.331263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-05-15 16:55:27.331384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-05-15 16:55:27.331408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-05-15 16:55:27.331539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-05-15 16:55:27.331647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-05-15 16:55:27.331671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 
00:34:20.302 [2024-05-15 16:55:27.373135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.373279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.373305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.373473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.373593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.373617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.373790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.373901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.373926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.374037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.374202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.374237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.374345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.374486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.374509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.374681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.374844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.374868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.375013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.375153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.375177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 
00:34:20.302 [2024-05-15 16:55:27.375307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.375444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.375468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.375638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.375749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.375773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.375932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.376045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.376069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.376231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.376402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.376427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.376577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.376707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.376730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.376871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.377173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 
00:34:20.302 [2024-05-15 16:55:27.377497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.377796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.377962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.302 qpair failed and we were unable to recover it. 00:34:20.302 [2024-05-15 16:55:27.378076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.302 [2024-05-15 16:55:27.378212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.378241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.378382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.378524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.378547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.378717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.378830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.378853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.378971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.379088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.379113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.379258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.379399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.379423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 
00:34:20.303 [2024-05-15 16:55:27.379553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.379714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.379738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.379897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.380189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.380500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.380823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.380987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.381128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.381274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.381299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.381414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.381537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.381561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 
00:34:20.303 [2024-05-15 16:55:27.381674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.381815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.381839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.381980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.382297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.382554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.382826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.382964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.383102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.383254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.383279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.383430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.383576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.383599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 
00:34:20.303 [2024-05-15 16:55:27.383716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.383853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.383877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.384021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.384159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.384187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.303 qpair failed and we were unable to recover it. 00:34:20.303 [2024-05-15 16:55:27.384355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.303 [2024-05-15 16:55:27.384470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.384493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.384603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.384716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.384741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.384893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.385197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.385519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-05-15 16:55:27.385815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.385975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.386092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.386232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.386260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.386403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.386544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.386568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.386707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.386844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.386869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.386979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.387257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.387566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-05-15 16:55:27.387848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.387983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.388119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.388433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.388726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.388891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.389028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.389143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.389168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.389343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.389484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.389508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.389628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.389764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.389787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-05-15 16:55:27.389929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.390043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.390069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.390242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.390354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.390379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-05-15 16:55:27.390489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-05-15 16:55:27.390629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.390653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.390801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.390914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.390938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.391053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.391191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.391222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.391364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.391527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.391552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.391694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.391835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.391859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-05-15 16:55:27.391995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.392276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.392557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.392837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.392997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.393146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.393311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.393336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.393478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.393642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.393667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.393783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.393933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.393957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-05-15 16:55:27.394073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.394207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.394238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.394354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.394495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.394519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.394664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.394800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.394824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.394935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.395093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.395116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.395269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.395386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.395410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.395579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.395711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.395735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.395898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-05-15 16:55:27.396200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.396530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-05-15 16:55:27.396805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-05-15 16:55:27.396941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.397078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.397190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.397220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.397337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.397480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.397505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.397645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.397760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.397784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.397900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-05-15 16:55:27.398228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.398535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.398833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.398962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.399097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.399263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.399288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.399437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.399579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.399603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.399739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.399850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.399873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.400023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.400135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.400160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-05-15 16:55:27.400311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.400446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.400470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.400608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.400724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.400749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.400889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.401172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.401444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.401697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.401863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.402024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.402136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.402164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-05-15 16:55:27.402295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.402434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.306 [2024-05-15 16:55:27.402459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.306 qpair failed and we were unable to recover it. 00:34:20.306 [2024-05-15 16:55:27.402620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.402731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.402755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.307 qpair failed and we were unable to recover it. 00:34:20.307 [2024-05-15 16:55:27.402877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.307 qpair failed and we were unable to recover it. 00:34:20.307 [2024-05-15 16:55:27.403181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.307 qpair failed and we were unable to recover it. 00:34:20.307 [2024-05-15 16:55:27.403492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.307 qpair failed and we were unable to recover it. 00:34:20.307 [2024-05-15 16:55:27.403772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.403902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.307 qpair failed and we were unable to recover it. 00:34:20.307 [2024-05-15 16:55:27.404036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.404150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.307 [2024-05-15 16:55:27.404175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.307 qpair failed and we were unable to recover it. 
00:34:20.307 [2024-05-15 16:55:27.404352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.307 [2024-05-15 16:55:27.404501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.307 [2024-05-15 16:55:27.404525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420
00:34:20.307 qpair failed and we were unable to recover it.
[The same error sequence (two posix_sock_create connect() failures with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back for every retry from 16:55:27.404 through 16:55:27.449; the duplicated iterations are elided here.]
00:34:20.313 [2024-05-15 16:55:27.449695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.449846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.449871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.450004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.450141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.450170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.450312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.450433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.450458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.450599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.450764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.450788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.450902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.451200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.451466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 
00:34:20.313 [2024-05-15 16:55:27.451744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.451884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.452022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.452154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.452178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.452306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.452439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.452463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.452580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.452743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.452767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.452925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.453069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.453093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.453223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.453368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.453392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.453553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.453691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.453716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 
00:34:20.313 [2024-05-15 16:55:27.453859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.454142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.454430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.454678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.454839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.454987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.455101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.455124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.455240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.455363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.313 [2024-05-15 16:55:27.455387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.313 qpair failed and we were unable to recover it. 00:34:20.313 [2024-05-15 16:55:27.455526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.455671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.455695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 
00:34:20.314 [2024-05-15 16:55:27.455818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.455956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.455981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.456149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.456291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.456315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.456457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.456600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.456623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.456764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.456882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.456905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.457014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.457291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.457573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 
00:34:20.314 [2024-05-15 16:55:27.457850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.457978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.458136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.458441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.458793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.458933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.459054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.459175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.459199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.459359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.459496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.459520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.459635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.459796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.459821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 
00:34:20.314 [2024-05-15 16:55:27.459956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.460245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.460560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.460837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.460976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.461115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.461243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.461267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.461407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.461550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.461573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.461700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.461821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.461844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 
00:34:20.314 [2024-05-15 16:55:27.461960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.462262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.462564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.462811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.462974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.463116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.463257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.463281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.463412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.463561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.463584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.463705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.463846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.463869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 
00:34:20.314 [2024-05-15 16:55:27.464017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.464322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.464599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.314 qpair failed and we were unable to recover it. 00:34:20.314 [2024-05-15 16:55:27.464847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.314 [2024-05-15 16:55:27.464984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.465135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.465415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.465729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.465915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 
00:34:20.315 [2024-05-15 16:55:27.466033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.466150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.466174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.466344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.466463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.466493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.466633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.466795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.466818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.466985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.467096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.467119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.467277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.467411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.467436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.467577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.467716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.467740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.467885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 
00:34:20.315 [2024-05-15 16:55:27.468190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.468445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.468736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.468904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.469048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.469189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.469213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.469360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.469468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.469492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.469631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.469744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.469767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 00:34:20.315 [2024-05-15 16:55:27.469908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.470014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.315 [2024-05-15 16:55:27.470038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047570 with addr=10.0.0.2, port=4420 00:34:20.315 qpair failed and we were unable to recover it. 
00:34:20.315 [2024-05-15 16:55:27.471249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20550f0 (9): Bad file descriptor 
00:34:20.315 [2024-05-15 16:55:27.471503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:20.315 [2024-05-15 16:55:27.471666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:20.315 [2024-05-15 16:55:27.471694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f44000b90 with addr=10.0.0.2, port=4420 
00:34:20.315 qpair failed and we were unable to recover it. 
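For reference, error code 9 in the flush failure above is EBADF ("Bad file descriptor") on Linux, which indicates the qpair's socket fd was already closed or otherwise invalid by the time the flush was attempted. A minimal sketch (illustrative only, not part of the SPDK test) showing how that errno arises:

    /* Illustrative: produce errno 9 (EBADF) by operating on a closed
     * socket fd -- the same error code the flush failure above reports. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                                  /* fd is now invalid */
        if (write(fd, "x", 1) < 0)
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        /* prints: write failed, errno = 9 (Bad file descriptor) */
        return 0;
    }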
[... the four-line retry sequence repeats for tqpair=0x7f6f44000b90 with advancing timestamps through 16:55:27.479, every connect() to 10.0.0.2, port=4420 still failing with errno = 111 ...]
00:34:20.316 [2024-05-15 16:55:27.479330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:20.316 [2024-05-15 16:55:27.479469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:20.316 [2024-05-15 16:55:27.479497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 
00:34:20.316 qpair failed and we were unable to recover it. 
[... the sequence then repeats for tqpair=0x7f6f4c000b90 through 16:55:27.486 with the same outcome: connect() failed, errno = 111, qpair failed and we were unable to recover it ...]
00:34:20.595 [2024-05-15 16:55:27.486622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.486735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.486761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.486900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.487153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.487431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.487725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.487855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.488000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.488236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-05-15 16:55:27.488579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.488852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.488992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.489098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.489240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.489265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.489384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.489545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.489569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.489708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.489868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.489892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.490002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.490114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.490138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-05-15 16:55:27.490283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-05-15 16:55:27.490431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.490455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-05-15 16:55:27.490575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.490714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.490739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.490880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.491163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.491468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.491736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.491897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.492020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.492292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-05-15 16:55:27.492550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.492826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.492970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.493088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.493206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.493243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.493387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.493518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.493543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.493698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.493843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.493867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.493981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.494270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-05-15 16:55:27.494530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.494837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.494972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.495079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.495219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.495244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.495373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.495506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.495530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.495642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.495761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.495785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.495933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.496190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-05-15 16:55:27.496484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.496782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.496952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.497070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.497210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.497250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.497385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.497559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.497584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.497696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.497840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.497864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.497977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.498138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.498163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-05-15 16:55:27.498317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.498437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-05-15 16:55:27.498461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-05-15 16:55:27.498603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.498744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.498769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.498877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.498997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.499163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.499475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.499782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.499942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.500056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.500195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.500225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.500344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.500457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.500481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-05-15 16:55:27.500619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.500754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.500779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.500943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.501247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.501518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.501791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.501956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.502121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.502283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.502309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.502425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.502567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.502592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-05-15 16:55:27.502731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.502844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.502870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.503014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.503166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.503190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.503319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.503427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.503451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.503593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.503720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.503745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.503886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.504031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.504055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.504199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.504342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-05-15 16:55:27.504368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-05-15 16:55:27.504481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.504596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.504620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-05-15 16:55:27.504735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.504868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.504893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.505005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.505244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.505530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.505833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.505976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.506094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.506207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.506264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.506384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.506524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.506549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-05-15 16:55:27.506668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.506831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.506855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.506994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.507315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.507564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.507842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.507987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.508148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.508454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-05-15 16:55:27.508782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.508976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.509118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.509226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.509252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.509365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.509477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.509501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.509669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.509832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.509857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.509972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.510251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.510532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-05-15 16:55:27.510853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.510990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.511135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.511263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.511298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.511422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.511566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.511590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.511728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.511888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.511917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.512030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.512141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.512168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.512321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.512466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.512491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-05-15 16:55:27.512615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-05-15 16:55:27.512754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.512778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 
00:34:20.599 [2024-05-15 16:55:27.512897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-05-15 16:55:27.513187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-05-15 16:55:27.513498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-05-15 16:55:27.513747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.513916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-05-15 16:55:27.514031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.514139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.514163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-05-15 16:55:27.514306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.514455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.514479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-05-15 16:55:27.514597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.514707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-05-15 16:55:27.514735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 
00:34:20.599 [2024-05-15 16:55:27.514847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.599 [2024-05-15 16:55:27.514987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.599 [2024-05-15 16:55:27.515011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.599 qpair failed and we were unable to recover it.
[... the same three-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f6f4c000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats ~150 more times, timestamps 16:55:27.515159 through 16:55:27.559366, elapsed time 00:34:20.599 to 00:34:20.604 ...]
00:34:20.604 [2024-05-15 16:55:27.559507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.559632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.559657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.604 qpair failed and we were unable to recover it. 00:34:20.604 [2024-05-15 16:55:27.559774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.559899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.559924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.604 qpair failed and we were unable to recover it. 00:34:20.604 [2024-05-15 16:55:27.560062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.560175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.560200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.604 qpair failed and we were unable to recover it. 00:34:20.604 [2024-05-15 16:55:27.560334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.560472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.560496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.604 qpair failed and we were unable to recover it. 00:34:20.604 [2024-05-15 16:55:27.560638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.560777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.560802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.604 qpair failed and we were unable to recover it. 00:34:20.604 [2024-05-15 16:55:27.560942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.561085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.561111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.604 qpair failed and we were unable to recover it. 00:34:20.604 [2024-05-15 16:55:27.561276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.604 [2024-05-15 16:55:27.561388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.561419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-05-15 16:55:27.561534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.561653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.561678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.561817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.561959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.561984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.562121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.562263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.562288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.562408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.562534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.562558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.562701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.562843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.562867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.562983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.563291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-05-15 16:55:27.563576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.563849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.563985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.564123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.564264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.564294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.564429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.564571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.564595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.564715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.564855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.564879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.564990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.565285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-05-15 16:55:27.565554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.565830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.565994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.566132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.566259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.566284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.566423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.566562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.566588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.566730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.566889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.566914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.567050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.567193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.567224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.567371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.567508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.567533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-05-15 16:55:27.567649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.567759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.567783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.567939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.568188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.568500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.568830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.568978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.569170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-05-15 16:55:27.569463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-05-15 16:55:27.569712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-05-15 16:55:27.569850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.569963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.570264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.570572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.570814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.570947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.571115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.571224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.571249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.571385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.571503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.571528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-05-15 16:55:27.571666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.571838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.571862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.571973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.572256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.572568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.572835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.572992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.573131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.573456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-05-15 16:55:27.573756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.573886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.574016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.574305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.574559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.574809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.574976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.575142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.575281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.575307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.575447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.575591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.575615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-05-15 16:55:27.575757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.575898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.575922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.576061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.576201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.576237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.576398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.576520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.576544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.576709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.576872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.576896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.577035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.577147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.577171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.577285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.577426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.577452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.577599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.577711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.577736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-05-15 16:55:27.577896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.578043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.578067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.578211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.578338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-05-15 16:55:27.578362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-05-15 16:55:27.578479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.578649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.578673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.578840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.578976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.579115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.579415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.579702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.579861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-05-15 16:55:27.579973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.580273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.580546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.580827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.580987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.581128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.581409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.581684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.581821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-05-15 16:55:27.581970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.582248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.582513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.582813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.582950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.583091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.583203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.583235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.583374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.583484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.583508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.583674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.583811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.583836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-05-15 16:55:27.583952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.584069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.584095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.584227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.584370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.584396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.584539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.584703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.584728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.584871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.585178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.585459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.585738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.585928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-05-15 16:55:27.586089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.586201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.586230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.586350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.586494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.586518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.586652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.586775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-05-15 16:55:27.586801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-05-15 16:55:27.586941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.587048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.587072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-05-15 16:55:27.587240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.587382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.587406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-05-15 16:55:27.587572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.587683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.587708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-05-15 16:55:27.587856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.588000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.588024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-05-15 16:55:27.588187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.588356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-05-15 16:55:27.588383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it.
[... the same failure sequence -- posix.c:1037:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it." -- repeats continuously with only the timestamps advancing, from 16:55:27.588 through 16:55:27.633 (log elapsed time 00:34:20.608 to 00:34:20.613) ...]
00:34:20.613 [2024-05-15 16:55:27.633111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.633248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.633273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.633441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.633562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.633586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.633698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.633823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.633848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.633986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.634121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.634145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.634263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.634383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.634408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.634561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.634699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.634723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.634861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.635031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.635055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 
00:34:20.613 [2024-05-15 16:55:27.635197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.635321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.635348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-05-15 16:55:27.635485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.635592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-05-15 16:55:27.635617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.635756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.635874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.635898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.636017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.636128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.636154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.636262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.636385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.636410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.636558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.636723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.636747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.636886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-05-15 16:55:27.637171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.637477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.637753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.637921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.638066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.638180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.638206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.638360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.638483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.638507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.638648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.638754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.638779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.638898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-05-15 16:55:27.639198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.639472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.639755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.639879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.640021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.640337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.640618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.640869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.640997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-05-15 16:55:27.641107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.641247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.641278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.641446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.641564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.641589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.641727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.641876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.641901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.642034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.642320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.642578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.642850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.642980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-05-15 16:55:27.643146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.643255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.643280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.643449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.643617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.643642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-05-15 16:55:27.643765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.643908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-05-15 16:55:27.643933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.644097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.644264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.644289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.644434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.644547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.644571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.644735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.644870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.644896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.645015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.645155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.645179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-05-15 16:55:27.645327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.645458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.645482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.645623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.645763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.645788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.645926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.646196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.646463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.646737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.646906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.647047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.647212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.647244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-05-15 16:55:27.647354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.647493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.647518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.647625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.647762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.647786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.647928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.648201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.648505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.648785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.648949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.649063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.649176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.649201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-05-15 16:55:27.649367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.649510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.649541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.649655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.649804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.649829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.649975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.650228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.650535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.650843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.650979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.651118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.651279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.651304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-05-15 16:55:27.651440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.651579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.651603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.651738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.651845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.651869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.652020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.652154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.652178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-05-15 16:55:27.652294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.652429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-05-15 16:55:27.652458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.652603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.652717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.652741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.652876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.653154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-05-15 16:55:27.653410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.653707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.653840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.653983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.654268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.654551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.654800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.654958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.655098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.655233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.655262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-05-15 16:55:27.655401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.655541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.655567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.655709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.655849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.655874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.656037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.656179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.656203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.656371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.656491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.656516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.656657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.656762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.656787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.656939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.657232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-05-15 16:55:27.657565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.657818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.657956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.658107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.658226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.658253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.658424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.658547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.658572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.658736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.658875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.658899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.659035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.659195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.659225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.659349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.659464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.659488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-05-15 16:55:27.659622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.659757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.659781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.659919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.660196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.660505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.660808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.660993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-05-15 16:55:27.661137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-05-15 16:55:27.661278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-05-15 16:55:27.661303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-05-15 16:55:27.661429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-05-15 16:55:27.661573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-05-15 16:55:27.661598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-05-15 16:55:27.661737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.617 [2024-05-15 16:55:27.661878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.617 [2024-05-15 16:55:27.661902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.617 qpair failed and we were unable to recover it.
[The same four-line failure record repeats continuously from 16:55:27.661737 through 16:55:27.706023: every reconnect attempt on tqpair=0x7f6f4c000b90 (addr=10.0.0.2, port=4420) fails in posix_sock_create with connect() errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error, and the qpair cannot be recovered.]
00:34:20.623 [2024-05-15 16:55:27.706163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.706278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.706304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.706441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.706592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.706616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.706753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.706888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.706912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.707033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.707274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.707548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.707823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.707985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 
00:34:20.623 [2024-05-15 16:55:27.708150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.708449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.708827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.708963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.709096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.709206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.709236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.709355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.709467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.709492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.709662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.709801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.709826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.709989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.710125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.710149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 
00:34:20.623 [2024-05-15 16:55:27.710286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.710419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.710444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.710662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.710796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.710820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.710990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.711283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.711557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.711831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.711994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-05-15 16:55:27.712158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.712275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.712301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 
00:34:20.623 [2024-05-15 16:55:27.712407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-05-15 16:55:27.712524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.712550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.712670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.712784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.712809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.712949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.713244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.713576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.713840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.713985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.714175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 
00:34:20.624 [2024-05-15 16:55:27.714464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.714769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.714931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.715101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.715317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.715342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.715478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.715593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.715617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.715752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.715968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.715992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.716113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.716250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.716276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.716402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.716531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.716555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 
00:34:20.624 [2024-05-15 16:55:27.716703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.716871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.716896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.717044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.717379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.717628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.717866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.717983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.718119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.718450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 
00:34:20.624 [2024-05-15 16:55:27.718758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.718919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.719038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.719201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.719230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.719408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.719547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.719571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.719688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.719806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.719830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.719962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.720105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.720129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.720347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.720566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.720590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.720758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.720871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.720896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 
00:34:20.624 [2024-05-15 16:55:27.721041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.721183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.721208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.624 qpair failed and we were unable to recover it. 00:34:20.624 [2024-05-15 16:55:27.721327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.624 [2024-05-15 16:55:27.721468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.721493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.721655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.721768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.721794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.721939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.722207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.722528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.722847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.722981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 
00:34:20.625 [2024-05-15 16:55:27.723175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.723529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.723804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.723963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.724073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.724183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.724207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.724373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.724482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.724509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.724671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.724811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.724835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.724972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.725140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.725165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 
00:34:20.625 [2024-05-15 16:55:27.725282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.725421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.725447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.725560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.725695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.725720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.725862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.726192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.726482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.726743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.726911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.727051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.727220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.727245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 
00:34:20.625 [2024-05-15 16:55:27.727392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.727507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.727533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.727674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.727833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.727857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.728001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.728288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.728550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.728838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.728982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.729006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.625 qpair failed and we were unable to recover it. 00:34:20.625 [2024-05-15 16:55:27.729110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.729255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.625 [2024-05-15 16:55:27.729281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 
00:34:20.626 [2024-05-15 16:55:27.729400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.729563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.729588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.729710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.729846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.729870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.730019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.730156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.730181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.730312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.730427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.730453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.730571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.730722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.730746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.730916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.731195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 
00:34:20.626 [2024-05-15 16:55:27.731477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.731816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.731982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.732145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.732431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.732713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.732851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.732970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.733246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 
00:34:20.626 [2024-05-15 16:55:27.733549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.733851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.733991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.734131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.734408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.734649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.734811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.734976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.735086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.735110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-05-15 16:55:27.735246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.735368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-05-15 16:55:27.735392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 
00:34:20.626 [2024-05-15 16:55:27.735504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.626 [2024-05-15 16:55:27.735611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.626 [2024-05-15 16:55:27.735635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.626 qpair failed and we were unable to recover it.
00:34:20.632 [... output condensed: the same four-line pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats, with only the timestamps advancing, for every remaining connection attempt in this window, 2024-05-15 16:55:27.735 through 16:55:27.781 ...]
00:34:20.632 [2024-05-15 16:55:27.781656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.781797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.781823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.781934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.782236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.782521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.782781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.782941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.783082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.783197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.783228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.783369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.783507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.783533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-05-15 16:55:27.783694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.783805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.783829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.783948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.784110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.784135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-05-15 16:55:27.784275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.784413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-05-15 16:55:27.784438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.784604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.784716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.784740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.784850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.784989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.785145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.785394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-05-15 16:55:27.785663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.785827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.785970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.786077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.786105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.786245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.786416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.786441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.786596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.786764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.786788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.786930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.787209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.787500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-05-15 16:55:27.787781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.787974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.788083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.788210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.788241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.788383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.788508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.788533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.788656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.788822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.788846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.788955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.789236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.789563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-05-15 16:55:27.789848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.789989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.790124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.790262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.790288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.790424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.790586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.790610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.790719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.790841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.790865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.790982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.791122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.791146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.791296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.791461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.791485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.791628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.791744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.791770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-05-15 16:55:27.791888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.792166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.792449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.792831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.792997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.793112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.793252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.793277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.793397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.793559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.793583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-05-15 16:55:27.793713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.793877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-05-15 16:55:27.793901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.634 [2024-05-15 16:55:27.794011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.794148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.794172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.794290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.794429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.794453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.794565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.794701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.794725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.794892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.795174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.795512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.795845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.795999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 
00:34:20.634 [2024-05-15 16:55:27.796109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.796252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.796278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.796414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.796538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.796562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.796701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.796817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.796843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.796961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.797085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.797110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.797227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.797380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.797405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.797544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.797788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.797812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.797949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 
00:34:20.634 [2024-05-15 16:55:27.798267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.798540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.798787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.798923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.799060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.799277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.799302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.799450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.799564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.799588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.799725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.799866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.799891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.800008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 
00:34:20.634 [2024-05-15 16:55:27.800283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.800563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.800868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.800978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.801150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.801433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.801729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.634 [2024-05-15 16:55:27.801973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.634 qpair failed and we were unable to recover it. 00:34:20.634 [2024-05-15 16:55:27.802113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.802267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.802294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 
00:34:20.914 [2024-05-15 16:55:27.802459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.802605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.802630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.802747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.802886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.802910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.803028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.803139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.803163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.803275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.803384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.803408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.803580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.803746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.803770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.803932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.804240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 
00:34:20.914 [2024-05-15 16:55:27.804527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.804816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.914 [2024-05-15 16:55:27.804991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.914 qpair failed and we were unable to recover it. 00:34:20.914 [2024-05-15 16:55:27.805152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.805322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.805364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.805526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.805669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.805695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.805836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.805977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.806146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.806487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 
00:34:20.915 [2024-05-15 16:55:27.806832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.806974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.807117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.807238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.807270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.807455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.807631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.807655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.807830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.807991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.808155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.808459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.808807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.808948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 
00:34:20.915 [2024-05-15 16:55:27.809058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.809197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.809236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.809423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.809587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.809612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.809742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.809879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.809903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.810020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.810134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.810159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.810332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.810517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.810544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.810709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.810846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.810871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 00:34:20.915 [2024-05-15 16:55:27.811013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.811189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.811224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it. 
00:34:20.915 [2024-05-15 16:55:27.811402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.811552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.915 [2024-05-15 16:55:27.811579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.915 qpair failed and we were unable to recover it.
00:34:20.915 - 00:34:20.917 (last four messages repeated for each subsequent reconnect attempt, 2024-05-15 16:55:27.811733 through 16:55:27.827295; only the timestamps differ)
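The errno = 111 that every one of these retries reports is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 at this point, because the test has just killed the target process (see the "Killed" line below), so each nvme_tcp_qpair_connect_sock() attempt is refused at the TCP level. A minimal way to decode the errno on the test node, assuming python3 is available there (this check is illustrative and not part of the test itself):

# map errno 111 to its symbolic name and message
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused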
00:34:20.917 (the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages keep repeating, 2024-05-15 16:55:27.827441 through 16:55:27.837537, interleaved with the shell trace below; only the timestamps differ)
00:34:20.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1940779 Killed "${NVMF_APP[@]}" "$@"
00:34:20.917 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:20.917 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:20.917 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:20.917 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:20.917 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1941329
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1941329
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1941329 ']'
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:20.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:20.918 16:55:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
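The shell trace above is the recovery half of the tc2 case: target_disconnect.sh has SIGKILLed the running target (the Killed "${NVMF_APP[@]}" line), and disconnect_init -> nvmfappstart now relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and polls for its RPC socket. A condensed sketch of that restart step, with the binary path, namespace, and flags taken verbatim from the trace; waitforlisten is SPDK's helper from test/common/autotest_common.sh, and the real nvmfappstart additionally sets up trap/cleanup handling:

# relaunch the NVMe-oF target in the test's network namespace (-m 0xF0 pins it to cores 4-7)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# block until the new app answers on /var/tmp/spdk.sock; per the trace it
# retries up to max_retries=100 before giving up
waitforlisten "$nvmfpid"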
00:34:20.918 - 00:34:20.921 (reconnect attempts to 10.0.0.2:4420 keep failing the same way, 2024-05-15 16:55:27.837650 through 16:55:27.859226, while the new target comes up: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."; only the timestamps differ)
00:34:20.921 [2024-05-15 16:55:27.859368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.859538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.859563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.859712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.859828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.859852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.859962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.860250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.860528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.860803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.860933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.861038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 
00:34:20.921 [2024-05-15 16:55:27.861321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.861584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.861870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.861988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.862183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.862474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.862788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.862955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.863070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.863214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.863249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 
00:34:20.921 [2024-05-15 16:55:27.863391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.863532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.863557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.863698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.863842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.863866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.863983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.864288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.864550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.864853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.864982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.865122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.865234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.865258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 
00:34:20.921 [2024-05-15 16:55:27.865373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.865516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.865540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.865656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.865797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.865821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.865971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.866113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.866138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.866286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.866421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.866447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.866617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.866730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.866756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.866904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.867042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.867067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 00:34:20.921 [2024-05-15 16:55:27.867181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.867323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.921 [2024-05-15 16:55:27.867348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.921 qpair failed and we were unable to recover it. 
00:34:20.921 [2024-05-15 16:55:27.867470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.867593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.867618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.867730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.867868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.867892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.868012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.868153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.868177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.868292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.868455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.868479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.868594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.868733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.868757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.868896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.869202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-05-15 16:55:27.869497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.869786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.869953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.870091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.870264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.870290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.870408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.870537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.870561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.870673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.870837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.870861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.870974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.871285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-05-15 16:55:27.871566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.871874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.871995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.872164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.872450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.872750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.872887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.873034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.873196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.873226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.873367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.873482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.873508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-05-15 16:55:27.873651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.873789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.873815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.873979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.874289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.874566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.874882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.874995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.875164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.875485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-05-15 16:55:27.875752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.875942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-05-15 16:55:27.876083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.876227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-05-15 16:55:27.876253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.876397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.876559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.876584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.876730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.876892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.876916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.877053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.877174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.877198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.877358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.877467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.877492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.877637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.877782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.877806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-05-15 16:55:27.877943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.878256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.878567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.878837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.878992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.879105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.879225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.879250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.879367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.879484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.879509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-05-15 16:55:27.879648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.879783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-05-15 16:55:27.879808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
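For readers triaging this failure: errno = 111 is ECONNREFUSED on Linux, i.e. the target answered the TCP SYN with a RST because nothing was listening on 10.0.0.2:4420 (4420 is the NVMe/TCP well-known port). The following is a minimal standalone sketch, not taken from the test code, that reproduces the same error outside SPDK, assuming a Linux host where no listener is bound to that address and port:

/*
 * Minimal sketch: reproduces the "connect() failed, errno = 111"
 * seen in the log above. Address and port mirror the log; both are
 * used here purely for illustration.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux reports ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

If the host were unreachable rather than refusing, connect() would instead fail with EHOSTUNREACH or time out; the consistent 111 here indicates the target machine is up but the nvmf listener is not (or not yet) accepting on port 4420.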
[... failure sequence continues from 16:55:27.879970 through 16:55:27.880707 ...]
00:34:20.923 [2024-05-15 16:55:27.880733] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:34:20.923 [2024-05-15 16:55:27.880807] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... failure sequence resumes from 16:55:27.880829 through 16:55:27.881591 ...]
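On the initialization line above: -c 0xF0 is DPDK's EAL core-mask option, and the mask value selects which CPU cores the nvmf target threads run on. A small standalone sketch (plain C, not SPDK or DPDK code) decoding that mask:

/* Decodes the EAL core mask -c 0xF0 from the log into the CPU cores
 * it selects. Bits 4-7 are set, so cores 4, 5, 6, and 7 are chosen. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;   /* value taken from the EAL parameters above */
    for (int cpu = 0; cpu < 8 * (int)sizeof(mask); cpu++) {
        if (mask & (1UL << cpu)) {
            printf("core %d selected\n", cpu);
        }
    }
    return 0;
}

Running this prints cores 4 through 7, which suggests the nvmf target in this job is pinned to those four cores while the initiator-side code produces the connection errors interleaved around it.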
[... log condensed: the failure sequence continues unchanged from 16:55:27.881700 through 16:55:27.896192, every attempt against tqpair=0x7f6f4c000b90 (10.0.0.2:4420) failing with errno = 111 and ending in "qpair failed and we were unable to recover it." ...]
00:34:20.933 [2024-05-15 16:55:27.896341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.896456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.896480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.896643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.896778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.896802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.896909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.897186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.897469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.897790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.897980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.898121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.898240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.898267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-05-15 16:55:27.898421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.898585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.898609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.898756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.898873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.898897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.899039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.899180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.899204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.899349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.899460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.899484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.899606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.899740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.899764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.899913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.900184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-05-15 16:55:27.900542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.900841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.900989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.901135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.901284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.901309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.901451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.901597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.901622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.901767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.901883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.901909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.902027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.902169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.902193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.902372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.902493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.902519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-05-15 16:55:27.902662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.902798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.902824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.902937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.903250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.903556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-05-15 16:55:27.903822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-05-15 16:55:27.903964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.904109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.904251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.904276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.904393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.904540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.904564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-05-15 16:55:27.904698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.904817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.904841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.904961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.905240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.905531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.905814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.905973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.906112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.906223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.906249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.906387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.906522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.906547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-05-15 16:55:27.906687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.906826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.906852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.906969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.907293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.907545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.907844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.907981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.908173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.908477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-05-15 16:55:27.908766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.908941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.909068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.909181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.909206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.909385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.909525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.909550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.909673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.909819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.909843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.909989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.910159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.910184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.910303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.910442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.910467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.910584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.910726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.910750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-05-15 16:55:27.910898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.911035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.911060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.911197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.911366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.911391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.911534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.911693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.911717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.911859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.912149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.912449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-05-15 16:55:27.912709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-05-15 16:55:27.912848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.912872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-05-15 16:55:27.912988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.913152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.913177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.913306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.913449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.913473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.913645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.913784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.913808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.913942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.914246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.914531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.914799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.914962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-05-15 16:55:27.915094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.915210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.915241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.915355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.915501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.915526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.915666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.915780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.915805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.915969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.916240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.916520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.916825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.916957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-05-15 16:55:27.917094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.917201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.917232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.917341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.917479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.917503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.917643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.917781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.917806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.917944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.918271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.918557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.918840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.918999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-05-15 16:55:27.919116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.919255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.919279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.919421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.919535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.919559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.919667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.919781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.919806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.919912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.920035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.920059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-05-15 16:55:27.920184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-05-15 16:55:27.920309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.920336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.920477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.920619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.920644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.920771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.920877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.920901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 
00:34:20.936 [2024-05-15 16:55:27.921032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.921166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.921192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.921338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.921474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.921498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.921639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.921775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.921799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.921915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.922080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.922104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.922241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.922369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.922394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.922532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.922672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.922696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.922861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 
00:34:20.936 [2024-05-15 16:55:27.923166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.923425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.923753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.923916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.924036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.924146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.924173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.924305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.924447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.924471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.924610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.924750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.924775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.924917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 
00:34:20.936 [2024-05-15 16:55:27.925191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.936 [2024-05-15 16:55:27.925528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.925837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.925980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.926119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.926419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.926718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.926857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 00:34:20.936 [2024-05-15 16:55:27.927025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.927164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.936 [2024-05-15 16:55:27.927189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.936 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-05-15 16:55:27.961788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.940 [2024-05-15 16:55:27.961922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.961947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.962084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.962225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.962250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.962375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.962514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.962538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.962652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.962800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.962825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.962991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.963100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.963125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.963250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.963371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.963397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.963534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.963532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:20.941 [2024-05-15 16:55:27.963693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.941 [2024-05-15 16:55:27.963718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.941 qpair failed and we were unable to recover it.
00:34:20.941 [2024-05-15 16:55:27.969960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-05-15 16:55:27.970079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-05-15 16:55:27.970103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 00:34:20.941 [2024-05-15 16:55:27.970240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-05-15 16:55:27.970355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-05-15 16:55:27.970380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 00:34:20.941 [2024-05-15 16:55:27.970494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.970629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.970653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.970815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.970932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.970956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.971105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.971225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.971251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.971363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.971463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.971487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.971659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.971774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.971799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 
00:34:20.942 [2024-05-15 16:55:27.971941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.972212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.972477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.972752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.972891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.973007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.973148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.973173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.973292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.973405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.973430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.973566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.973736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.973760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 
00:34:20.942 [2024-05-15 16:55:27.973899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.974175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.974486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.974769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.974935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.975054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.975195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.975227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.975348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.975466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.975491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.975634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.975774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.975799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 
00:34:20.942 [2024-05-15 16:55:27.975937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.976213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.976549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.976861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.976994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.977187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.977313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.977338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.977474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.977587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.977612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.977755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.977874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.977898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 
00:34:20.942 [2024-05-15 16:55:27.978013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.978155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.978181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.978340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.978458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.978483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.978650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.978788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.978812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.942 [2024-05-15 16:55:27.978937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.979074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.942 [2024-05-15 16:55:27.979098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.942 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.979239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.979359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.979383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.979559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.979694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.979720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.979883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 
00:34:20.943 [2024-05-15 16:55:27.980190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.980471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.980757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.980926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.981059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.981195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.981261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.981411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.981523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.981548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.981686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.981819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.981843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.982016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.982149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.982174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 
00:34:20.943 [2024-05-15 16:55:27.982350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.982519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.982544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.982654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.982794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.982820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.982969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.983110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.983135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.983257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.983398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.983423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.983542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.983680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.983705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.983861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.984194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 
00:34:20.943 [2024-05-15 16:55:27.984474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.984753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.984918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.985035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.985193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.985226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.985373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.985486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.985511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.985626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.985766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.985792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.985934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.986266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 
00:34:20.943 [2024-05-15 16:55:27.986539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.986816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.986985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.987096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.987227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.987253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.987370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.987480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.987505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.987647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.987811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.987835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.943 qpair failed and we were unable to recover it. 00:34:20.943 [2024-05-15 16:55:27.987977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.988146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.943 [2024-05-15 16:55:27.988170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.988335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.988456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.988481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-05-15 16:55:27.988617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.988754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.988779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.988946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.989240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.989545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.989808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.989953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.990066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.990209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.990242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.990373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.990493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.990518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-05-15 16:55:27.990660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.990775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.990801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.990912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.991178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.991469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.991800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.991934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.992049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.992182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.992207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.992354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.992478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.992503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-05-15 16:55:27.992649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.992796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.992821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.992968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.993293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.993573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.993843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.993984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.994147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.994432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-05-15 16:55:27.994743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.994901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.995014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.995155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.995181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.995346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.995517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.995543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-05-15 16:55:27.995689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.995833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-05-15 16:55:27.995862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.995971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.996247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.996571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-05-15 16:55:27.996866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.996983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.997117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.997408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.997651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.997823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.997967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.998105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.998129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.998246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.998355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.998380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-05-15 16:55:27.998547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.998685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-05-15 16:55:27.998714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-05-15 16:55:27.998883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:20.945 [2024-05-15 16:55:27.999022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:20.945 [2024-05-15 16:55:27.999047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 
00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [... the above four-line failure sequence repeats essentially verbatim roughly 150 more times between 16:55:27.998 and 16:55:28.043, differing only in the microsecond timestamps; every occurrence reports connect() failed, errno = 111 (ECONNREFUSED) for tqpair=0x7f6f4c000b90 at addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:34:20.950 [2024-05-15 16:55:28.043788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.043928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.043953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.044114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.044249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.044275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.044393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.044612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.044638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.044773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.044893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.044918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.045041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.045177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.045203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.045349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.045467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.045492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.045604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.045740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.045769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 
00:34:20.950 [2024-05-15 16:55:28.045883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.046208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.046531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.950 qpair failed and we were unable to recover it. 00:34:20.950 [2024-05-15 16:55:28.046813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.950 [2024-05-15 16:55:28.046956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.047073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.047213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.047246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.047371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.047509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.047533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.047682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.047818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.047843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 
00:34:20.951 [2024-05-15 16:55:28.047989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.048208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.048241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.048396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.048549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.048575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.048736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.048849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.048877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.049019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.049271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.049535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 00:34:20.951 [2024-05-15 16:55:28.049793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.951 [2024-05-15 16:55:28.049931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.951 qpair failed and we were unable to recover it. 
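
For anyone triaging this failure pattern: errno = 111 on Linux is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2:4420 (the conventional NVMe/TCP port) was refused because nothing was listening there yet, so the host-side qpair fails and the driver keeps retrying. A minimal sketch, independent of SPDK, that reproduces the same errno against the address from the log, assuming no listener is present on that port:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same address/port as the failing qpair in the log above. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints:
         * connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Once the target side finishes listening on the port, the same connect() succeeds and the retry loop in the log stops.
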
00:34:20.951 [2024-05-15 16:55:28.050035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.951 [2024-05-15 16:55:28.050143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.951 [2024-05-15 16:55:28.050167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.951 qpair failed and we were unable to recover it.
00:34:20.951 [2024-05-15 16:55:28.050322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:20.951 [2024-05-15 16:55:28.050355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:20.951 [2024-05-15 16:55:28.050370] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:20.951 [2024-05-15 16:55:28.050384] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:20.951 [2024-05-15 16:55:28.050395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:20.951 [2024-05-15 16:55:28.050457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:20.951 [2024-05-15 16:55:28.050517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:20.951 [2024-05-15 16:55:28.050487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:20.951 [2024-05-15 16:55:28.050521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:20.951 [... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." entries continue interleaved with the notices above, from 16:55:28.050284 through 16:55:28.051295 ...]
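
These notices mark the moment the target application finishes starting mid-test: tracing is enabled (snapshot via 'spdk_trace -s nvmf -i 0', or copy /dev/shm/nvmf_trace.0 for offline analysis), and one reactor thread comes up on each of cores 4-7, i.e. a reactor mask of 0xF0. As a minimal sketch of how an SPDK application requests that core placement -- assuming a recent SPDK where spdk_app_opts_init() takes the struct size, and with an illustrative app name not taken from this log:

#include "spdk/event.h"

/* Runs on the main reactor once the framework is up; by this point
 * the "Reactor started on core N" notices have been printed. */
static void
start_fn(void *ctx)
{
    (void)ctx;
    spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    (void)argc;
    (void)argv;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "reactor_mask_demo";  /* illustrative name */
    opts.reactor_mask = "0xF0";       /* one reactor each on cores 4-7 */

    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}

With that mask, the framework emits exactly the per-core reactor_run notices seen above before handing control to start_fn().
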
00:34:20.951 [2024-05-15 16:55:28.051405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.951 [2024-05-15 16:55:28.051519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.951 [2024-05-15 16:55:28.051543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.951 qpair failed and we were unable to recover it.
00:34:20.951-00:34:20.954 [... the same three-entry sequence repeats continuously from 16:55:28.051681 through 16:55:28.079514, always against tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420 ...]
00:34:20.955 [2024-05-15 16:55:28.079631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.079769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.079793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.079943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.080201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.080493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.080791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.080950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.081060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.081194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.081226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.081370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.081480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.081507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 
00:34:20.955 [2024-05-15 16:55:28.081626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.081757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.081782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.081902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.082247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.082557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.082866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.082978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.083116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.083376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 
00:34:20.955 [2024-05-15 16:55:28.083635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.083778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.083902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.084162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.084409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.084655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.084789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.084924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.085205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 
00:34:20.955 [2024-05-15 16:55:28.085476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.085729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.085877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.086014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.086296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.086575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.086822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.086969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.087084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.087227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.087252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 
00:34:20.955 [2024-05-15 16:55:28.087395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.087506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.087531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 00:34:20.955 [2024-05-15 16:55:28.087653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.087769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-05-15 16:55:28.087794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.087934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.088195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.088467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.088717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.088877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.088982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.089201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.089232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 
00:34:20.956 [2024-05-15 16:55:28.089452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.089592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.089617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.089730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.089893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.089918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.090032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.090335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.090622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.090883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.090992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.091145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 
00:34:20.956 [2024-05-15 16:55:28.091530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.091790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.091984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.092125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.092283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.092308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.092426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.092548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.092573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.092698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.092873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.092898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.093018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.093130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.093156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.093307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.093445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.093470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 
00:34:20.956 [2024-05-15 16:55:28.093582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.093800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.093825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.093958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.094248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.094505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.094755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.094975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.095127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.095403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 
00:34:20.956 [2024-05-15 16:55:28.095653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.095874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.095983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.096101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.096126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.096269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.096377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.096402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.956 qpair failed and we were unable to recover it. 00:34:20.956 [2024-05-15 16:55:28.096514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.956 [2024-05-15 16:55:28.096632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.096658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.096782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.096896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.096922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.097068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.097375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 
00:34:20.957 [2024-05-15 16:55:28.097620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.097883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.097990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.098190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.098484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.098752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.098912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.099055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.099163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.099188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.099308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.099460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.099484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 
00:34:20.957 [2024-05-15 16:55:28.099598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.099733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.099758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.099922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.100174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.100538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.100812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.100958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.101068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.101231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.101257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.101402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.101619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.101645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 
00:34:20.957 [2024-05-15 16:55:28.101755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.101883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.101909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.102034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.102285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.102585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.102841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.102985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.103121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.103233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.103258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.103367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.103480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.103505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 
00:34:20.957 [2024-05-15 16:55:28.103616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.103757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.103782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.957 qpair failed and we were unable to recover it. 00:34:20.957 [2024-05-15 16:55:28.104001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.957 [2024-05-15 16:55:28.104117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.104263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.104520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.104773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.104911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.105031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.105166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.105191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.105315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.105435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.105459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-05-15 16:55:28.105599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.105711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.105740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.105895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.106240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.106523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.106853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.106995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.107110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.107255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.107281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-05-15 16:55:28.107434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.107555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-05-15 16:55:28.107580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-05-15 16:55:28.107688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.958 [2024-05-15 16:55:28.107827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.958 [2024-05-15 16:55:28.107851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:20.958 qpair failed and we were unable to recover it.
[... the same four-record failure cycle repeats continuously from 16:55:28.107688 through 16:55:28.150278 (elapsed 00:34:20.958 to 00:34:21.233): two posix_sock_create connect() failures with errno = 111, followed by one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f6f4c000b90 (addr=10.0.0.2, port=4420), followed by "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions. ...]
00:34:21.233 [2024-05-15 16:55:28.150398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.150505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.150534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.150652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.150759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.150783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.150887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.151169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.151445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.151726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.151866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.151974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 
00:34:21.233 [2024-05-15 16:55:28.152259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.152517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.152792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.152918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.153039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.153301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.153564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.153810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.153946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 
00:34:21.233 [2024-05-15 16:55:28.154059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.154211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.154271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.154413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.154524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.154548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.154685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.154800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.154825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.154941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.155226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.155469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.233 qpair failed and we were unable to recover it. 00:34:21.233 [2024-05-15 16:55:28.155765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.233 [2024-05-15 16:55:28.155910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.155949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 
00:34:21.234 [2024-05-15 16:55:28.156059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.156195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.156227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.156348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.156463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.156488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.156623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.156791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.156816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.156921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.157190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.157477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.157725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.157869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 
00:34:21.234 [2024-05-15 16:55:28.157988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.158271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.158555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.158798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.158970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.159107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.159242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.159268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.159373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.159511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.159535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.159646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.159753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.159778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 
00:34:21.234 [2024-05-15 16:55:28.159896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.160204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.160462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.160735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.160906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.161043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.161301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.161563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 
00:34:21.234 [2024-05-15 16:55:28.161853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.161993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.162160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.162271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.162297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.162414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.162552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.162576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.162722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.162827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.162851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.162960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.163203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.163514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 
00:34:21.234 [2024-05-15 16:55:28.163780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.163922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.234 qpair failed and we were unable to recover it. 00:34:21.234 [2024-05-15 16:55:28.164034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.234 [2024-05-15 16:55:28.164143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.164167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.164297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.164407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.164432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.164552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.164701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.164727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.164867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.164983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.165123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.165381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 
00:34:21.235 [2024-05-15 16:55:28.165658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.165805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.165931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.166211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.166468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.166744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.166880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.167022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.167160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.167186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.167337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.167443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.167468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 
00:34:21.235 [2024-05-15 16:55:28.167587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.167725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.167750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.167892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.168153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.168400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.168711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.168881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.168997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.169273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 
00:34:21.235 [2024-05-15 16:55:28.169567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.169846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.169982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.170124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.170398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.170667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.235 [2024-05-15 16:55:28.170813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.235 qpair failed and we were unable to recover it. 00:34:21.235 [2024-05-15 16:55:28.170923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.171180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 
00:34:21.236 [2024-05-15 16:55:28.171475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.171769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.171959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.172091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.172202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.172255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.172373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.172494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.172519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.172653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.172765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.172791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.172910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.173162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 
00:34:21.236 [2024-05-15 16:55:28.173422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.173713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.173876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.174040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.174326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.174604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.174873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.174977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.175124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 
00:34:21.236 [2024-05-15 16:55:28.175416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.175705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.175833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.175949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.176238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.176515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.176804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.176938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 00:34:21.236 [2024-05-15 16:55:28.177055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.177169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.236 [2024-05-15 16:55:28.177194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.236 qpair failed and we were unable to recover it. 
00:34:21.236 [2024-05-15 16:55:28.177352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.236 [2024-05-15 16:55:28.177463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.236 [2024-05-15 16:55:28.177488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420
00:34:21.236 qpair failed and we were unable to recover it.
[... the same four-record connect()/qpair-failure sequence repeats back-to-back through 16:55:28.186; duplicates condensed ...]
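On Linux, errno 111 is ECONNREFUSED: each connect() above reached 10.0.0.2, but nothing was accepting on TCP port 4420, so the SPDK initiator's nvme_tcp_qpair_connect_sock keeps failing and retrying. A minimal shell sketch of the same probe, assuming bash with /dev/tcp support and visibility of the test's 10.0.0.2 network (neither is guaranteed outside this CI node):

    # bash's /dev/tcp pseudo-device issues a plain TCP connect(2), so a
    # refused probe here is the same ECONNREFUSED the log records as errno = 111
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection (ECONNREFUSED)"
    fi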
00:34:21.237 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:21.238 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:21.238 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:21.238 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:21.238 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect()/qpair-failure sequence keeps repeating around these trace lines (16:55:28.186 to 16:55:28.188); duplicates condensed ...]
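The trace lines above show the harness leaving its start_nvmf_tgt timing section after a readiness check (the (( i == 0 )) guard, then return 0). A hypothetical, simplified sketch of that poll-until-listening shape; the real helper lives in SPDK's autotest_common.sh and differs in detail, and the ss(8) probe, 30-try budget, and 1 s interval are all assumptions:

    # Retry a listener probe on the NVMe/TCP port until it succeeds or the
    # timeout budget is exhausted; callers branch on the return code.
    wait_for_listener() {
        local i
        for ((i = 0; i < 30; i++)); do
            ss -ltn 2>/dev/null | grep -q ':4420 ' && return 0
            sleep 1
        done
        return 1
    }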
[... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / qpair-failure sequence repeats continuously from 16:55:28.188 through 16:55:28.207; duplicates condensed ...]
00:34:21.240 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:21.240 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:21.240 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.241 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure repeats continue around these trace lines (16:55:28.207 to 16:55:28.210); duplicates condensed ...]
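The trap line registers the usual cleanup (process_shm, nvmftestfini) for exit and interrupt, and the test then issues its first RPC: rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. create a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; the equivalent standalone call, assuming a running SPDK target on the default /var/tmp/spdk.sock RPC socket, would be:

    # 64  -> bdev size in MB, 512 -> logical block size in bytes,
    # -b  -> name the new bdev Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0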
[... the connect() failed, errno = 111 retry sequence continues unchanged from 16:55:28.211 through 16:55:28.218; duplicates condensed ...]
00:34:21.242 [2024-05-15 16:55:28.218878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.218991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.219133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.219398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.219653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.219840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.219987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.220266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.220531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-05-15 16:55:28.220859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.220994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.221132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.221382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.221698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.221856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.222051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.222169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.222195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.222321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.222439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.222464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.222602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.222739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.222763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-05-15 16:55:28.222899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.223169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.223475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.223753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.223891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-05-15 16:55:28.224027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.224162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-05-15 16:55:28.224187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.224302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.224445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.224469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.224584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.224711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.224735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 
00:34:21.243 [2024-05-15 16:55:28.224872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.224979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.225128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.225400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.225650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.225809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.225959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.226231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.226573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 
00:34:21.243 [2024-05-15 16:55:28.226829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.226974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.227144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.227275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.227304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.227418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.227555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.227579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.227717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.227855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.227880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.228001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.228285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.228576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 
00:34:21.243 [2024-05-15 16:55:28.228855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.228995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.229107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.229213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.229245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.229409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.229516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.229542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.229656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.229799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.229823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.229938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.230235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.230479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 
00:34:21.243 [2024-05-15 16:55:28.230747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.230940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.231089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.231198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.231231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.231349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.231471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.231496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.231640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.231773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.231798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.231908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.232020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.232045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.232189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.232339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.232364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-05-15 16:55:28.232488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-05-15 16:55:28.232593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.244 [2024-05-15 16:55:28.232618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.244 qpair failed and we were unable to recover it. 
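Each failure above is the same four-line record: two connect() errors from posix.c, the resulting sock connection error from nvme_tcp.c:2374, and the qpair giving up. errno = 111 is ECONNREFUSED on Linux, meaning nothing was accepting connections on 10.0.0.2:4420 at the time of each attempt, which is consistent with the target-side transport and subsystem only being created further down in this log. One way to decode the errno from a shell (assuming python3 is on the box):

  # ECONNREFUSED: the peer actively refused the connection (no listener on that port)
  python3 -c 'import os; print(os.strerror(111))'   # -> Connection refused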
00:34:21.244 [16:55:28.232732 .. 16:55:28.232883] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.244 Malloc0
00:34:21.244 [16:55:28.233001 .. 16:55:28.233278] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.244 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.244 [16:55:28.233396 .. 16:55:28.233420] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.244 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:21.244 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.244 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.244 [16:55:28.233557 .. 16:55:28.234283] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.244 [16:55:28.234389 .. 16:55:28.236155] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.244 [16:55:28.236272 .. 16:55:28.236572] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.244 [2024-05-15 16:55:28.236658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:21.244 [16:55:28.236718 .. 16:55:28.238143] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
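The rpc_cmd nvmf_create_transport -t tcp -o step above is the harness's thin wrapper around SPDK's JSON-RPC client, and the tcp.c "*** TCP Transport Init ***" notice is the target acknowledging it. As a minimal sketch, the same call issued by hand against a running nvmf_tgt would look like the following (assuming an SPDK checkout and the default /var/tmp/spdk.sock RPC socket):

  # create the NVMe-oF TCP transport; flags mirror the harness invocation above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o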
00:34:21.244 [16:55:28.238274 .. 16:55:28.244060] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.245 [16:55:28.244164 .. 16:55:28.244866] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.245 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.245 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:21.245 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.245 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.245 [16:55:28.244975 .. 16:55:28.245640] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
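With the transport up, the harness creates the subsystem the initiator has been dialing. A sketch of the equivalent direct RPC calls follows: the nvmf_create_subsystem line mirrors the harness exactly, while the namespace and listener lines are illustrative assumptions inferred from the Malloc0 bdev and the 10.0.0.2:4420 address seen in this log, not commands taken from it. Only once a listener exists on that address can the initiator's connect() stop returning ECONNREFUSED.

  # create the subsystem with allow-any-host (-a) and the serial used by the test
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # assumed follow-ups: attach the Malloc0 bdev as a namespace, then listen where the host connects
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420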
00:34:21.245 [16:55:28.245758 .. 16:55:28.251358] repeated connect()/qpair-failure record (errno = 111, tqpair=0x7f6f4c000b90, addr=10.0.0.2, port=4420); qpair failed and we were unable to recover it.
00:34:21.247 [2024-05-15 16:55:28.251471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.251592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.251617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-05-15 16:55:28.251729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.251840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.251865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-05-15 16:55:28.252003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.252161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.252185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-05-15 16:55:28.252309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.252418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.252443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-05-15 16:55:28.252581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.252721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.252747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-05-15 16:55:28.252856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.247 [2024-05-15 16:55:28.252992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-05-15 16:55:28.253017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6f4c000b90 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 
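errno = 111 is ECONNREFUSED: the host side keeps dialing 10.0.0.2:4420 before the target has registered its TCP listener (the *** Listening *** notice only appears further down in this log). A minimal shell probe for the same condition, assuming netcat is available on the test host:

    # Poll the NVMe/TCP target port; connect() gets ECONNREFUSED (111)
    # until the listener is registered, after which nc -z succeeds.
    until nc -z -w1 10.0.0.2 4420; do
        sleep 0.1
    done
    echo "10.0.0.2:4420 is accepting connections"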
00:34:21.247 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:21.247 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.247 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.247 [the connect() failed / qpair failed cycle continues in the background from 16:55:28.253 through 16:55:28.254]
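rpc_cmd here is the autotest wrapper around SPDK's scripts/rpc.py; outside the harness the same namespace attach can be issued directly (a sketch, assuming the target's default RPC socket at /var/tmp/spdk.sock):

    # Attach the Malloc0 bdev as a namespace of subsystem cnode1.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0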
00:34:21.249 [further connect() failed (errno = 111) / sock connection error / qpair failed cycles from 16:55:28.254 through 16:55:28.260]
00:34:21.249 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.249 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:21.249 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.249 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.249 [the connect() failed / qpair failed cycle continues from 16:55:28.260 through 16:55:28.264]
00:34:21.249 [2024-05-15 16:55:28.264658] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:34:21.249 [2024-05-15 16:55:28.264730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-05-15 16:55:28.264935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:21.249 [2024-05-15 16:55:28.267860] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:34:21.249 [2024-05-15 16:55:28.267924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f6f4c000b90 (107): Transport endpoint is not connected
00:34:21.249 [2024-05-15 16:55:28.267991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:21.249 qpair failed and we were unable to recover it.
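The decode_rpc_listen_address warning above concerns the wire format of the listener RPC: the listen address used to carry a [listen_]address.transport key, which is deprecated in favor of trtype (removal slated for v24.09). A sketch of the non-deprecated request body that rpc.py builds from the -t/-a/-s flags, with the values this test uses:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_listener",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {
          "trtype": "tcp",
          "adrfam": "IPv4",
          "traddr": "10.0.0.2",
          "trsvcid": "4420"
        }
      }
    }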
00:34:21.250 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.250 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:21.250 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.250 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.250 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.250 16:55:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1940919
00:34:21.250 [2024-05-15 16:55:28.277316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.250 [2024-05-15 16:55:28.277515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.250 [2024-05-15 16:55:28.277546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.250 [2024-05-15 16:55:28.277565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.250 [2024-05-15 16:55:28.277578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90
00:34:21.250 [2024-05-15 16:55:28.277608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:21.250 qpair failed and we were unable to recover it.
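target_disconnect.sh@50 then parks on wait with the PID of the backgrounded host-side process (1940919 here), picking up its exit status once the disconnect scenario finishes. The same pattern in plain bash (a generic sketch; the command name is hypothetical):

    # Launch the workload in the background and capture its PID,
    # then block until it exits; $? carries its exit status.
    ./host_workload &
    pid=$!
    wait "$pid"
    echo "workload exit status: $?"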
00:34:21.512 [the Unknown controller ID 0x1 / Connect command failed (rc -5, sct 1, sc 130) / Failed to poll NVMe-oF Fabric CONNECT command / CQ transport error -6 sequence repeats for each new qpair attempt, roughly every 10 ms from 16:55:28.287 through 16:55:28.618, and each attempt ends "qpair failed and we were unable to recover it."]
00:34:21.512 [2024-05-15 16:55:28.628136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.628252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.628278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.628292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.628306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.628335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.638129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.638252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.638279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.638295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.638308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.638338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.648167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.648328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.648355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.648370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.648383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.648413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 
00:34:21.512 [2024-05-15 16:55:28.658235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.658371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.658397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.658412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.658425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.658461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.668251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.668370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.668397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.668417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.668431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.668460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.678240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.678364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.678389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.678404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.678417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.678447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 
00:34:21.512 [2024-05-15 16:55:28.688325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.688448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.688475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.688491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.688504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.688533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.698308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.698421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.698458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.698473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.698492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.698522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.708335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.708502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.708528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.708543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.708566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.708595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 
00:34:21.512 [2024-05-15 16:55:28.718361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.718479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.718504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.718519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.718532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.718561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.512 [2024-05-15 16:55:28.728472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.512 [2024-05-15 16:55:28.728637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.512 [2024-05-15 16:55:28.728664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.512 [2024-05-15 16:55:28.728679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.512 [2024-05-15 16:55:28.728692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.512 [2024-05-15 16:55:28.728721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.512 qpair failed and we were unable to recover it. 00:34:21.771 [2024-05-15 16:55:28.738471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.771 [2024-05-15 16:55:28.738589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.771 [2024-05-15 16:55:28.738613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.771 [2024-05-15 16:55:28.738628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.771 [2024-05-15 16:55:28.738641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.771 [2024-05-15 16:55:28.738670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.771 qpair failed and we were unable to recover it. 
00:34:21.771 [2024-05-15 16:55:28.748439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.748557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.748584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.748599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.748611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.748641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.758470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.758588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.758613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.758628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.758641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.758670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.768486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.768596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.768621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.768635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.768648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.768677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 
00:34:21.772 [2024-05-15 16:55:28.778555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.778689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.778713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.778728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.778741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.778770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.788590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.788711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.788736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.788756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.788770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.788799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.798583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.798703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.798727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.798745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.798759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.798788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 
00:34:21.772 [2024-05-15 16:55:28.808617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.808725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.808750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.808764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.808777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.808807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.818696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.818869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.818894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.818909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.818922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.818950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.828701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.828820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.828846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.828862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.828875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.828904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 
00:34:21.772 [2024-05-15 16:55:28.838730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.838848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.838873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.838887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.838900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.838929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.848754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.848910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.848938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.848956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.848971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.849013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.858764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.858880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.858907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.858922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.858935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.858964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 
00:34:21.772 [2024-05-15 16:55:28.868774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.868891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.868917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.868931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.868945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.868974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.772 [2024-05-15 16:55:28.878821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.772 [2024-05-15 16:55:28.878941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.772 [2024-05-15 16:55:28.878973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.772 [2024-05-15 16:55:28.878992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.772 [2024-05-15 16:55:28.879006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.772 [2024-05-15 16:55:28.879035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.772 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.888857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.888972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.888997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.889012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.889024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.889066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 
00:34:21.773 [2024-05-15 16:55:28.898892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.899013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.899040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.899055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.899068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.899098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.908896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.909040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.909066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.909081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.909094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.909124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.918965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.919083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.919108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.919124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.919136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.919171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 
00:34:21.773 [2024-05-15 16:55:28.928957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.929072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.929099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.929113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.929126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.929169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.938995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.939118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.939144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.939159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.939172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.939212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.949016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.949136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.949162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.949178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.949191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.949229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 
00:34:21.773 [2024-05-15 16:55:28.959036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.959144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.959169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.959184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.959197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.959236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.969098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.969255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.969289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.969308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.969323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.969354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:21.773 [2024-05-15 16:55:28.979134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.979263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.979291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.979307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.979319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.979349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 
00:34:21.773 [2024-05-15 16:55:28.989164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.773 [2024-05-15 16:55:28.989296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.773 [2024-05-15 16:55:28.989323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.773 [2024-05-15 16:55:28.989342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.773 [2024-05-15 16:55:28.989357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:21.773 [2024-05-15 16:55:28.989387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.773 qpair failed and we were unable to recover it. 00:34:22.032 [2024-05-15 16:55:28.999198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.032 [2024-05-15 16:55:28.999320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.032 [2024-05-15 16:55:28.999347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.032 [2024-05-15 16:55:28.999361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:28.999374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:28.999404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.009194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.009313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.009339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.009354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.009367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.009402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-05-15 16:55:29.019235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.019359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.019386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.019401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.019414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.019443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.029254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.029384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.029410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.029426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.029438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.029468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.039266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.039380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.039406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.039421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.039434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.039463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-05-15 16:55:29.049332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.049442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.049467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.049483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.049496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.049525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.059359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.059491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.059518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.059537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.059551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.059582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.069480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.069614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.069641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.069656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.069669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.069698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-05-15 16:55:29.079433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.079541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.079568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.079582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.079595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.079624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.089459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.089575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.089599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.089614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.089628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.089657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-05-15 16:55:29.099512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.099636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.099662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.099677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.099696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.099726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-05-15 16:55:29.109514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.033 [2024-05-15 16:55:29.109638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.033 [2024-05-15 16:55:29.109665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.033 [2024-05-15 16:55:29.109680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.033 [2024-05-15 16:55:29.109693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.033 [2024-05-15 16:55:29.109737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.119581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.119700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.119727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.119742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.119756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.119785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.129521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.129637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.129663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.129678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.129691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.129721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-05-15 16:55:29.139579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.139711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.139737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.139752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.139765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.139795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.149623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.149744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.149770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.149785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.149798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.149828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.159656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.159785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.159812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.159827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.159839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.159869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-05-15 16:55:29.169672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.169789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.169816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.169831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.169843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.169872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.179718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.179837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.179864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.179879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.179892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.179933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.189742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.189862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.189888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.189908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.189922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.189951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-05-15 16:55:29.199747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.199914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.199941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.199956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.199969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f4c000b90 00:34:22.034 [2024-05-15 16:55:29.200011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.209785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.209911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.209943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.209959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.209972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.034 [2024-05-15 16:55:29.210001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.219840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.219988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.220017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.220032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.220045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.034 [2024-05-15 16:55:29.220073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-05-15 16:55:29.229902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.230024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.230052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.230068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.230080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.034 [2024-05-15 16:55:29.230108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.239872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.239988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.034 [2024-05-15 16:55:29.240016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.034 [2024-05-15 16:55:29.240032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.034 [2024-05-15 16:55:29.240045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.034 [2024-05-15 16:55:29.240073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-05-15 16:55:29.249910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.034 [2024-05-15 16:55:29.250027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.035 [2024-05-15 16:55:29.250054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.035 [2024-05-15 16:55:29.250070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.035 [2024-05-15 16:55:29.250083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.035 [2024-05-15 16:55:29.250111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.035 qpair failed and we were unable to recover it. 
00:34:22.294 [2024-05-15 16:55:29.259932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.294 [2024-05-15 16:55:29.260052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.294 [2024-05-15 16:55:29.260080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.294 [2024-05-15 16:55:29.260096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.294 [2024-05-15 16:55:29.260109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.294 [2024-05-15 16:55:29.260137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.294 qpair failed and we were unable to recover it. 00:34:22.294 [2024-05-15 16:55:29.270017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.294 [2024-05-15 16:55:29.270151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.294 [2024-05-15 16:55:29.270179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.294 [2024-05-15 16:55:29.270195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.294 [2024-05-15 16:55:29.270208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.294 [2024-05-15 16:55:29.270246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.294 qpair failed and we were unable to recover it. 00:34:22.294 [2024-05-15 16:55:29.279961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.294 [2024-05-15 16:55:29.280077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.294 [2024-05-15 16:55:29.280104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.294 [2024-05-15 16:55:29.280125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.294 [2024-05-15 16:55:29.280138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.294 [2024-05-15 16:55:29.280167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.294 qpair failed and we were unable to recover it. 
00:34:22.294 [2024-05-15 16:55:29.290020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.294 [2024-05-15 16:55:29.290160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.294 [2024-05-15 16:55:29.290187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.294 [2024-05-15 16:55:29.290202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.294 [2024-05-15 16:55:29.290221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.294 [2024-05-15 16:55:29.290252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.294 qpair failed and we were unable to recover it. 00:34:22.294 [2024-05-15 16:55:29.300047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.294 [2024-05-15 16:55:29.300225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.294 [2024-05-15 16:55:29.300252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.294 [2024-05-15 16:55:29.300267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.294 [2024-05-15 16:55:29.300280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.294 [2024-05-15 16:55:29.300308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.294 qpair failed and we were unable to recover it. 00:34:22.294 [2024-05-15 16:55:29.310039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.294 [2024-05-15 16:55:29.310160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.294 [2024-05-15 16:55:29.310189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.294 [2024-05-15 16:55:29.310204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.294 [2024-05-15 16:55:29.310226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.294 [2024-05-15 16:55:29.310257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.294 qpair failed and we were unable to recover it. 
00:34:22.294 [2024-05-15 16:55:29.320095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.320213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.320247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.320263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.320276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.320304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.330123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.330250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.330277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.330292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.330305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.330333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.340172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.340298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.340324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.340339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.340352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.340380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 
00:34:22.295 [2024-05-15 16:55:29.350140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.350255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.350284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.350299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.350312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.350340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.360211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.360337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.360363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.360377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.360390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.360418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.370245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.370360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.370390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.370406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.370418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.370446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 
00:34:22.295 [2024-05-15 16:55:29.380287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.380406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.380432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.380451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.380464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.380493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.390299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.390428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.390455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.390470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.390482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.390510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.400295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.400411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.400437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.400452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.400464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.400491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 
00:34:22.295 [2024-05-15 16:55:29.410318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.410429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.410455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.410469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.410481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.410515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.420367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.420490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.420516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.420530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.420543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.420570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.430416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.430552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.430578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.430592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.430604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.430632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 
00:34:22.295 [2024-05-15 16:55:29.440402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.440512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.440538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.440552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.440564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.440592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.295 [2024-05-15 16:55:29.450424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.295 [2024-05-15 16:55:29.450537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.295 [2024-05-15 16:55:29.450562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.295 [2024-05-15 16:55:29.450577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.295 [2024-05-15 16:55:29.450589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.295 [2024-05-15 16:55:29.450617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.295 qpair failed and we were unable to recover it. 00:34:22.296 [2024-05-15 16:55:29.460500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.296 [2024-05-15 16:55:29.460637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.296 [2024-05-15 16:55:29.460668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.296 [2024-05-15 16:55:29.460683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.296 [2024-05-15 16:55:29.460696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.296 [2024-05-15 16:55:29.460723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.296 qpair failed and we were unable to recover it. 
00:34:22.296 [2024-05-15 16:55:29.470510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.296 [2024-05-15 16:55:29.470617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.296 [2024-05-15 16:55:29.470642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.296 [2024-05-15 16:55:29.470656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.296 [2024-05-15 16:55:29.470669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.296 [2024-05-15 16:55:29.470696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.296 qpair failed and we were unable to recover it. 00:34:22.296 [2024-05-15 16:55:29.480541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.296 [2024-05-15 16:55:29.480657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.296 [2024-05-15 16:55:29.480683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.296 [2024-05-15 16:55:29.480698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.296 [2024-05-15 16:55:29.480710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.296 [2024-05-15 16:55:29.480737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.296 qpair failed and we were unable to recover it. 00:34:22.296 [2024-05-15 16:55:29.490577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.296 [2024-05-15 16:55:29.490692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.296 [2024-05-15 16:55:29.490717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.296 [2024-05-15 16:55:29.490731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.296 [2024-05-15 16:55:29.490743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.296 [2024-05-15 16:55:29.490770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.296 qpair failed and we were unable to recover it. 
00:34:22.296 [2024-05-15 16:55:29.500616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.296 [2024-05-15 16:55:29.500735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.296 [2024-05-15 16:55:29.500760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.296 [2024-05-15 16:55:29.500774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.296 [2024-05-15 16:55:29.500787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.296 [2024-05-15 16:55:29.500819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.296 qpair failed and we were unable to recover it. 00:34:22.296 [2024-05-15 16:55:29.510651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.296 [2024-05-15 16:55:29.510768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.296 [2024-05-15 16:55:29.510794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.296 [2024-05-15 16:55:29.510808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.296 [2024-05-15 16:55:29.510820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.296 [2024-05-15 16:55:29.510848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.296 qpair failed and we were unable to recover it. 00:34:22.554 [2024-05-15 16:55:29.520671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.554 [2024-05-15 16:55:29.520802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.554 [2024-05-15 16:55:29.520827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.554 [2024-05-15 16:55:29.520845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.554 [2024-05-15 16:55:29.520858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.554 [2024-05-15 16:55:29.520886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 
00:34:22.555 [2024-05-15 16:55:29.530706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.530816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.530842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.530856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.530869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.530896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.540799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.540938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.540964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.540978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.540990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.541018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.550719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.550837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.550868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.550884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.550896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.550923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 
00:34:22.555 [2024-05-15 16:55:29.560755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.560869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.560894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.560909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.560922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.560950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.570799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.570910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.570936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.570951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.570963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.570991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.580817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.580969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.580995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.581009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.581021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.581049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 
00:34:22.555 [2024-05-15 16:55:29.590833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.590961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.590987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.591002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.591014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.591047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.600922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.601045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.601071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.601085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.601098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.601125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.610891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.611001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.611027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.611041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.611053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.611081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 
00:34:22.555 [2024-05-15 16:55:29.621052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.621224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.621250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.621265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.621277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.621305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.630956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.631081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.631108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.631123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.631135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.631163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 00:34:22.555 [2024-05-15 16:55:29.641006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.555 [2024-05-15 16:55:29.641142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.555 [2024-05-15 16:55:29.641173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.555 [2024-05-15 16:55:29.641188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.555 [2024-05-15 16:55:29.641200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570 00:34:22.555 [2024-05-15 16:55:29.641235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.555 qpair failed and we were unable to recover it. 
00:34:22.555 [2024-05-15 16:55:29.651003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.555 [2024-05-15 16:55:29.651114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.555 [2024-05-15 16:55:29.651140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.555 [2024-05-15 16:55:29.651155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.555 [2024-05-15 16:55:29.651167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2047570
00:34:22.555 [2024-05-15 16:55:29.651195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:22.555 qpair failed and we were unable to recover it.
[... the identical seven-message CONNECT failure sequence repeats for 25 further attempts between 16:55:29.661 and 16:55:29.901, all on tqpair=0x2047570 (qpair id 3) ...]
00:34:22.815 [2024-05-15 16:55:29.911810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.815 [2024-05-15 16:55:29.911948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.815 [2024-05-15 16:55:29.911980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.815 [2024-05-15 16:55:29.911997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.815 [2024-05-15 16:55:29.912010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90
00:34:22.815 [2024-05-15 16:55:29.912041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:22.815 qpair failed and we were unable to recover it.
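Every rejected attempt above carries the same completion status, sct 1, sc 130: status code type 0x1 is command-specific, and for a Fabrics CONNECT command status code 0x82 (decimal 130) is Connect Invalid Parameters per the NVMe-oF specification, which matches the target-side complaint that controller ID 0x1 is unknown (the host keeps trying to attach an I/O qpair to a controller the target no longer has). A minimal sketch of that decoding, using a hypothetical helper rather than anything from the test scripts:

  # Hypothetical helper, not part of the autotest: decode the sct/sc pair printed above.
  decode_nvme_status() {
    local sct=$1 sc=$2
    # Show the pair in hex the way the spec tables list it.
    printf 'sct=0x%02x sc=0x%02x\n' "$sct" "$sc"
    if [ "$sct" -eq 1 ] && [ "$sc" -eq 130 ]; then
      echo 'command-specific status: Fabrics CONNECT Invalid Parameters (0x82)'
    fi
  }
  decode_nvme_status 1 130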
[... the same failure sequence repeats for 41 further attempts between 16:55:29.921 and 16:55:30.322, now on tqpair=0x7f6f44000b90 (qpair id 2) ...]
00:34:23.334 [2024-05-15 16:55:30.332967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.334 [2024-05-15 16:55:30.333091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.334 [2024-05-15 16:55:30.333116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.334 [2024-05-15 16:55:30.333131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.334 [2024-05-15 16:55:30.333144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90
00:34:23.334 [2024-05-15 16:55:30.333174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:23.334 qpair failed and we were unable to recover it.
00:34:23.334 [2024-05-15 16:55:30.343021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.343146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.343172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.343187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.343201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.343239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.353028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.353152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.353177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.353198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.353212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.353252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.363056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.363183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.363207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.363230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.363244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.363274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-05-15 16:55:30.373105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.373252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.373284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.373300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.373313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.373343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.383132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.383260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.383286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.383301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.383314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.383346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.393154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.393288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.393321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.393335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.393348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.393378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-05-15 16:55:30.403201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.403331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.403356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.403371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.403383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.403413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.413191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.413317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.413345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.413361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.413374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.413404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.423265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.423403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.423431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.423450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.423463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.423494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-05-15 16:55:30.433267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.433401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.433430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.433446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.433463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.433495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.443336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.443500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.443532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.443548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.443561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.443591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.453357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.453496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.453523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.453538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.453551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.453582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-05-15 16:55:30.463389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.463516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.463544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.463559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.335 [2024-05-15 16:55:30.463572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.335 [2024-05-15 16:55:30.463614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-05-15 16:55:30.473393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.335 [2024-05-15 16:55:30.473506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.335 [2024-05-15 16:55:30.473531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.335 [2024-05-15 16:55:30.473546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.473559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.473588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-05-15 16:55:30.483422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.483534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.483559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.483573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.483586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.483615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-05-15 16:55:30.493443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.493577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.493604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.493619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.493641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.493671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-05-15 16:55:30.503470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.503593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.503618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.503633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.503646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.503682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-05-15 16:55:30.513487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.513607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.513632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.513646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.513659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.513689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-05-15 16:55:30.523545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.523666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.523693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.523709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.523723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.523753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-05-15 16:55:30.533556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.533691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.533723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.533741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.533755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.533785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-05-15 16:55:30.543563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.543680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.543705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.543720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.543733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.543763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-05-15 16:55:30.553585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.336 [2024-05-15 16:55:30.553706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.336 [2024-05-15 16:55:30.553731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.336 [2024-05-15 16:55:30.553747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.336 [2024-05-15 16:55:30.553760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.336 [2024-05-15 16:55:30.553790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.596 [2024-05-15 16:55:30.563654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.596 [2024-05-15 16:55:30.563772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.596 [2024-05-15 16:55:30.563798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.596 [2024-05-15 16:55:30.563813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.596 [2024-05-15 16:55:30.563826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.596 [2024-05-15 16:55:30.563856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.596 qpair failed and we were unable to recover it. 00:34:23.596 [2024-05-15 16:55:30.573677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.596 [2024-05-15 16:55:30.573806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.596 [2024-05-15 16:55:30.573831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.596 [2024-05-15 16:55:30.573846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.596 [2024-05-15 16:55:30.573860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.596 [2024-05-15 16:55:30.573895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.596 qpair failed and we were unable to recover it. 
00:34:23.596 [2024-05-15 16:55:30.583727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.596 [2024-05-15 16:55:30.583849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.596 [2024-05-15 16:55:30.583874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.596 [2024-05-15 16:55:30.583890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.596 [2024-05-15 16:55:30.583903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.596 [2024-05-15 16:55:30.583936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.596 qpair failed and we were unable to recover it. 00:34:23.596 [2024-05-15 16:55:30.593708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.596 [2024-05-15 16:55:30.593823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.596 [2024-05-15 16:55:30.593849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.596 [2024-05-15 16:55:30.593865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.596 [2024-05-15 16:55:30.593879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.596 [2024-05-15 16:55:30.593910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.596 qpair failed and we were unable to recover it. 00:34:23.596 [2024-05-15 16:55:30.603739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.596 [2024-05-15 16:55:30.603855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.596 [2024-05-15 16:55:30.603881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.596 [2024-05-15 16:55:30.603896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.596 [2024-05-15 16:55:30.603909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.596 [2024-05-15 16:55:30.603941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.596 qpair failed and we were unable to recover it. 
00:34:23.596 [2024-05-15 16:55:30.613828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.596 [2024-05-15 16:55:30.613943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.596 [2024-05-15 16:55:30.613968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.596 [2024-05-15 16:55:30.613989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.596 [2024-05-15 16:55:30.614001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.596 [2024-05-15 16:55:30.614031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.596 qpair failed and we were unable to recover it. 00:34:23.596 [2024-05-15 16:55:30.623805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.623931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.623964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.623980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.623993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.624035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.633814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.633936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.633961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.633976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.633989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.634019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 
00:34:23.597 [2024-05-15 16:55:30.643856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.643967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.643992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.644007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.644020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.644051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.653880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.654017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.654041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.654056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.654070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.654099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.663939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.664105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.664130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.664145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.664164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.664195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 
00:34:23.597 [2024-05-15 16:55:30.673969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.674088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.674114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.674128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.674142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.674174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.683995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.684126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.684153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.684169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.684186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.684226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.693993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.694109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.694137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.694153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.694166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.694197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 
00:34:23.597 [2024-05-15 16:55:30.704032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.704150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.704178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.704193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.704206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.704244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.714057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.714187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.714222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.714240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.714253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.714283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.724086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.724240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.724267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.724282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.724295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.724338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 
00:34:23.597 [2024-05-15 16:55:30.734105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.734224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.734260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.734275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.734288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.734317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.744186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.744316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.744343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.744358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.744370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.744400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 00:34:23.597 [2024-05-15 16:55:30.754175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.597 [2024-05-15 16:55:30.754304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.597 [2024-05-15 16:55:30.754337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.597 [2024-05-15 16:55:30.754357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.597 [2024-05-15 16:55:30.754371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.597 [2024-05-15 16:55:30.754402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.597 qpair failed and we were unable to recover it. 
00:34:23.597 [2024-05-15 16:55:30.764192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.598 [2024-05-15 16:55:30.764331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.598 [2024-05-15 16:55:30.764356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.598 [2024-05-15 16:55:30.764371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.598 [2024-05-15 16:55:30.764384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.598 [2024-05-15 16:55:30.764413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.598 qpair failed and we were unable to recover it. 00:34:23.598 [2024-05-15 16:55:30.774213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.598 [2024-05-15 16:55:30.774329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.598 [2024-05-15 16:55:30.774355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.598 [2024-05-15 16:55:30.774369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.598 [2024-05-15 16:55:30.774383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.598 [2024-05-15 16:55:30.774413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.598 qpair failed and we were unable to recover it. 00:34:23.598 [2024-05-15 16:55:30.784314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.598 [2024-05-15 16:55:30.784437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.598 [2024-05-15 16:55:30.784463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.598 [2024-05-15 16:55:30.784479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.598 [2024-05-15 16:55:30.784492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.598 [2024-05-15 16:55:30.784523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.598 qpair failed and we were unable to recover it. 
00:34:23.598 [2024-05-15 16:55:30.794283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.598 [2024-05-15 16:55:30.794433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.598 [2024-05-15 16:55:30.794460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.598 [2024-05-15 16:55:30.794475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.598 [2024-05-15 16:55:30.794488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.598 [2024-05-15 16:55:30.794518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.598 qpair failed and we were unable to recover it. 00:34:23.598 [2024-05-15 16:55:30.804296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.598 [2024-05-15 16:55:30.804446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.598 [2024-05-15 16:55:30.804473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.598 [2024-05-15 16:55:30.804488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.598 [2024-05-15 16:55:30.804501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.598 [2024-05-15 16:55:30.804531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.598 qpair failed and we were unable to recover it. 00:34:23.598 [2024-05-15 16:55:30.814322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.598 [2024-05-15 16:55:30.814467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.598 [2024-05-15 16:55:30.814492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.598 [2024-05-15 16:55:30.814507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.598 [2024-05-15 16:55:30.814520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:23.598 [2024-05-15 16:55:30.814550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.598 qpair failed and we were unable to recover it. 
[log condensed: the seven-message CONNECT failure sequence above repeats verbatim for every subsequent retry, from [2024-05-15 16:55:30.804296] through [2024-05-15 16:55:31.476253], roughly one attempt every 10 ms; only the timestamps change, and every attempt ends with "qpair failed and we were unable to recover it."]
00:34:24.379 [2024-05-15 16:55:31.486309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.379 [2024-05-15 16:55:31.486432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.379 [2024-05-15 16:55:31.486459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.379 [2024-05-15 16:55:31.486474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.379 [2024-05-15 16:55:31.486490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.379 [2024-05-15 16:55:31.486520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.379 qpair failed and we were unable to recover it. 00:34:24.379 [2024-05-15 16:55:31.496316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.379 [2024-05-15 16:55:31.496451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.379 [2024-05-15 16:55:31.496479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.379 [2024-05-15 16:55:31.496498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.379 [2024-05-15 16:55:31.496511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.379 [2024-05-15 16:55:31.496542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.379 qpair failed and we were unable to recover it. 00:34:24.379 [2024-05-15 16:55:31.506391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.506513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.506540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.506558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.506571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.506601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 
00:34:24.380 [2024-05-15 16:55:31.516357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.516479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.516507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.516522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.516535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.516565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 00:34:24.380 [2024-05-15 16:55:31.526417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.526543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.526571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.526586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.526599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.526642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 00:34:24.380 [2024-05-15 16:55:31.536424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.536541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.536568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.536583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.536596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.536627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 
00:34:24.380 [2024-05-15 16:55:31.546489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.546657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.546684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.546699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.546712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.546743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 00:34:24.380 [2024-05-15 16:55:31.556477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.556614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.556642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.556658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.556681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.556713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 00:34:24.380 [2024-05-15 16:55:31.566526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.566655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.566682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.566697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.566710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.566740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 
00:34:24.380 [2024-05-15 16:55:31.576539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.576668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.576695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.576710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.576723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.576752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 00:34:24.380 [2024-05-15 16:55:31.586580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.586702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.586728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.586743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.586756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.586786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 00:34:24.380 [2024-05-15 16:55:31.596592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.380 [2024-05-15 16:55:31.596718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.380 [2024-05-15 16:55:31.596743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.380 [2024-05-15 16:55:31.596758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.380 [2024-05-15 16:55:31.596771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.380 [2024-05-15 16:55:31.596801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.380 qpair failed and we were unable to recover it. 
00:34:24.638 [2024-05-15 16:55:31.606622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.638 [2024-05-15 16:55:31.606732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.638 [2024-05-15 16:55:31.606759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.638 [2024-05-15 16:55:31.606774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.638 [2024-05-15 16:55:31.606787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.638 [2024-05-15 16:55:31.606817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.638 qpair failed and we were unable to recover it. 00:34:24.638 [2024-05-15 16:55:31.616679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.638 [2024-05-15 16:55:31.616799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.638 [2024-05-15 16:55:31.616826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.638 [2024-05-15 16:55:31.616840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.638 [2024-05-15 16:55:31.616854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.638 [2024-05-15 16:55:31.616884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.638 qpair failed and we were unable to recover it. 00:34:24.638 [2024-05-15 16:55:31.626676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.638 [2024-05-15 16:55:31.626795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.638 [2024-05-15 16:55:31.626822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.626837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.626849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.626879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 
00:34:24.639 [2024-05-15 16:55:31.636718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.636835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.636861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.636876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.636888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.636918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.646764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.646895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.646923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.646947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.646961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.646993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.656747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.656863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.656890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.656905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.656917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.656948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 
00:34:24.639 [2024-05-15 16:55:31.666875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.666994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.667021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.667036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.667049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.667078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.676812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.676928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.676956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.676971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.676984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.677025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.686840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.686966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.686993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.687008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.687021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.687051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 
00:34:24.639 [2024-05-15 16:55:31.696871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.696987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.697015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.697033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.697048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.697079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.706895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.707012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.707039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.707054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.707067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.707097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.716938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.717056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.717083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.717098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.717111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.717141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 
00:34:24.639 [2024-05-15 16:55:31.726952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.727067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.727094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.727109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.727121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.727151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.736982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.737114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.737146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.737162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.737174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.639 [2024-05-15 16:55:31.737204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.639 qpair failed and we were unable to recover it. 00:34:24.639 [2024-05-15 16:55:31.747084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.639 [2024-05-15 16:55:31.747234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.639 [2024-05-15 16:55:31.747261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.639 [2024-05-15 16:55:31.747276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.639 [2024-05-15 16:55:31.747288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.747318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 
00:34:24.640 [2024-05-15 16:55:31.757162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.757300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.757327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.757342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.757355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.757385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.767067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.767182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.767210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.767232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.767246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.767278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.777145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.777269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.777296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.777312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.777325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.777361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 
00:34:24.640 [2024-05-15 16:55:31.787140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.787311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.787338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.787353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.787365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.787395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.797168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.797289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.797316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.797331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.797344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.797385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.807166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.807331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.807358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.807373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.807386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.807417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 
00:34:24.640 [2024-05-15 16:55:31.817186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.817305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.817332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.817347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.817360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.817391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.827288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.827430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.827462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.827478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.827491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.827521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.837278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.837409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.837436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.837451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.837464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.837495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 
00:34:24.640 [2024-05-15 16:55:31.847313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.847434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.847459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.847474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.847488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.847517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.640 [2024-05-15 16:55:31.857336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.640 [2024-05-15 16:55:31.857459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.640 [2024-05-15 16:55:31.857486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.640 [2024-05-15 16:55:31.857501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.640 [2024-05-15 16:55:31.857514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.640 [2024-05-15 16:55:31.857544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.640 qpair failed and we were unable to recover it. 00:34:24.899 [2024-05-15 16:55:31.867344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.867469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.899 [2024-05-15 16:55:31.867496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.899 [2024-05-15 16:55:31.867511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.899 [2024-05-15 16:55:31.867525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.899 [2024-05-15 16:55:31.867560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.899 qpair failed and we were unable to recover it. 
00:34:24.899 [2024-05-15 16:55:31.877373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.877504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.899 [2024-05-15 16:55:31.877531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.899 [2024-05-15 16:55:31.877547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.899 [2024-05-15 16:55:31.877568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.899 [2024-05-15 16:55:31.877597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.899 qpair failed and we were unable to recover it. 00:34:24.899 [2024-05-15 16:55:31.887428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.887553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.899 [2024-05-15 16:55:31.887580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.899 [2024-05-15 16:55:31.887595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.899 [2024-05-15 16:55:31.887608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.899 [2024-05-15 16:55:31.887638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.899 qpair failed and we were unable to recover it. 00:34:24.899 [2024-05-15 16:55:31.897430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.897565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.899 [2024-05-15 16:55:31.897589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.899 [2024-05-15 16:55:31.897604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.899 [2024-05-15 16:55:31.897617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.899 [2024-05-15 16:55:31.897646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.899 qpair failed and we were unable to recover it. 
00:34:24.899 [2024-05-15 16:55:31.907465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.907594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.899 [2024-05-15 16:55:31.907621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.899 [2024-05-15 16:55:31.907636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.899 [2024-05-15 16:55:31.907648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.899 [2024-05-15 16:55:31.907678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.899 qpair failed and we were unable to recover it. 00:34:24.899 [2024-05-15 16:55:31.917521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.917654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.899 [2024-05-15 16:55:31.917682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.899 [2024-05-15 16:55:31.917697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.899 [2024-05-15 16:55:31.917710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.899 [2024-05-15 16:55:31.917752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.899 qpair failed and we were unable to recover it. 00:34:24.899 [2024-05-15 16:55:31.927502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.899 [2024-05-15 16:55:31.927632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.900 [2024-05-15 16:55:31.927659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.900 [2024-05-15 16:55:31.927674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.900 [2024-05-15 16:55:31.927686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.900 [2024-05-15 16:55:31.927716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.900 qpair failed and we were unable to recover it. 
00:34:24.900 [2024-05-15 16:55:31.937577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.900 [2024-05-15 16:55:31.937744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.900 [2024-05-15 16:55:31.937771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.900 [2024-05-15 16:55:31.937786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.900 [2024-05-15 16:55:31.937799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.900 [2024-05-15 16:55:31.937829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.900 qpair failed and we were unable to recover it. 00:34:24.900 [2024-05-15 16:55:31.947572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.900 [2024-05-15 16:55:31.947743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.900 [2024-05-15 16:55:31.947770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.900 [2024-05-15 16:55:31.947785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.900 [2024-05-15 16:55:31.947798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.900 [2024-05-15 16:55:31.947840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.900 qpair failed and we were unable to recover it. 00:34:24.900 [2024-05-15 16:55:31.957598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.900 [2024-05-15 16:55:31.957716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.900 [2024-05-15 16:55:31.957743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.900 [2024-05-15 16:55:31.957758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.900 [2024-05-15 16:55:31.957776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:24.900 [2024-05-15 16:55:31.957807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.900 qpair failed and we were unable to recover it. 
[... the identical CONNECT failure sequence (ctrlr.c: 755 "Unknown controller ID 0x1" -> nvme_fabric.c: 600/611 "Connect command failed, rc -5 ... sct 1, sc 130" -> nvme_tcp.c:2426 "Failed to poll NVMe-oF Fabric CONNECT command" -> nvme_tcp.c:2216 "Failed to connect tqpair=0x7f6f44000b90" -> nvme_qpair.c: 804 "CQ transport error -6 (No such device or address) on qpair id 2" -> "qpair failed and we were unable to recover it.") repeats for every subsequent qpair connect attempt, timestamps 2024-05-15 16:55:31.967 through 16:55:32.619, elapsed markers 00:34:24.900 through 00:34:25.421 ...]
00:34:25.422 [2024-05-15 16:55:32.629507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.422 [2024-05-15 16:55:32.629635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.422 [2024-05-15 16:55:32.629666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.422 [2024-05-15 16:55:32.629682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.422 [2024-05-15 16:55:32.629695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.422 [2024-05-15 16:55:32.629737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.422 qpair failed and we were unable to recover it. 00:34:25.422 [2024-05-15 16:55:32.639526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.422 [2024-05-15 16:55:32.639641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.422 [2024-05-15 16:55:32.639666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.422 [2024-05-15 16:55:32.639681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.422 [2024-05-15 16:55:32.639694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.422 [2024-05-15 16:55:32.639724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.422 qpair failed and we were unable to recover it. 00:34:25.681 [2024-05-15 16:55:32.649543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.681 [2024-05-15 16:55:32.649654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.681 [2024-05-15 16:55:32.649679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.681 [2024-05-15 16:55:32.649694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.681 [2024-05-15 16:55:32.649708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.681 [2024-05-15 16:55:32.649738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.681 qpair failed and we were unable to recover it. 
00:34:25.681 [2024-05-15 16:55:32.659570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.681 [2024-05-15 16:55:32.659683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.681 [2024-05-15 16:55:32.659708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.681 [2024-05-15 16:55:32.659723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.681 [2024-05-15 16:55:32.659736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.681 [2024-05-15 16:55:32.659765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.681 qpair failed and we were unable to recover it. 00:34:25.681 [2024-05-15 16:55:32.669630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.681 [2024-05-15 16:55:32.669798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.681 [2024-05-15 16:55:32.669825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.681 [2024-05-15 16:55:32.669840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.681 [2024-05-15 16:55:32.669853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.681 [2024-05-15 16:55:32.669889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.679663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.679801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.679830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.679845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.679861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.679892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 
00:34:25.682 [2024-05-15 16:55:32.689664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.689785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.689813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.689834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.689847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.689889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.699807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.699928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.699956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.699971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.699987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.700016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.709753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.709875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.709901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.709916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.709929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.709958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 
00:34:25.682 [2024-05-15 16:55:32.719788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.719900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.719931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.719947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.719961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.720003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.729764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.729874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.729899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.729913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.729925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.729955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.739848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.739961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.739987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.740001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.740015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.740047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 
00:34:25.682 [2024-05-15 16:55:32.749860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.749983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.750010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.750025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.750039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.750068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.759842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.759955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.759979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.759994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.760013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.760043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.769885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.769996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.770022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.770037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.770050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.770079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 
00:34:25.682 [2024-05-15 16:55:32.779898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.780060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.780087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.780102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.780115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.780145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.789949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.790067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.790094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.790115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.790128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.790158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 00:34:25.682 [2024-05-15 16:55:32.799984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.682 [2024-05-15 16:55:32.800104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.682 [2024-05-15 16:55:32.800132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.682 [2024-05-15 16:55:32.800150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.682 [2024-05-15 16:55:32.800163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.682 [2024-05-15 16:55:32.800205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.682 qpair failed and we were unable to recover it. 
00:34:25.682 [2024-05-15 16:55:32.809998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.810128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.810154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.810169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.810182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.810212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.683 [2024-05-15 16:55:32.820014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.820128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.820154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.820170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.820183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.820213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.683 [2024-05-15 16:55:32.830059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.830176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.830201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.830224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.830239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.830280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 
00:34:25.683 [2024-05-15 16:55:32.840097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.840264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.840289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.840303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.840315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.840345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.683 [2024-05-15 16:55:32.850108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.850224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.850249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.850283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.850297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.850326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.683 [2024-05-15 16:55:32.860166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.860334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.860360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.860375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.860388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.860430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 
00:34:25.683 [2024-05-15 16:55:32.870184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.870310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.870335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.870349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.870362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.870392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.683 [2024-05-15 16:55:32.880207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.880344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.880369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.880384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.880397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.880439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.683 [2024-05-15 16:55:32.890285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.890444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.890470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.890484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.890511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.890542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 
00:34:25.683 [2024-05-15 16:55:32.900281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.683 [2024-05-15 16:55:32.900403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.683 [2024-05-15 16:55:32.900429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.683 [2024-05-15 16:55:32.900443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.683 [2024-05-15 16:55:32.900457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.683 [2024-05-15 16:55:32.900487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.683 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:32.910373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.910505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.910530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.910545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.910557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.910587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:32.920333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.920472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.920497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.920512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.920525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.920555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 
00:34:25.942 [2024-05-15 16:55:32.930354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.930473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.930498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.930512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.930525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.930556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:32.940378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.940508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.940536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.940557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.940571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.940602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:32.950495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.950618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.950643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.950658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.950671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.950700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 
00:34:25.942 [2024-05-15 16:55:32.960431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.960548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.960573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.960588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.960601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.960631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:32.970470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.970635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.970660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.970675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.970688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.970720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:32.980498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.980621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.980647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.980662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.980675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.980706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 
00:34:25.942 [2024-05-15 16:55:32.990513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:32.990635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:32.990662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:32.990678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.942 [2024-05-15 16:55:32.990691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.942 [2024-05-15 16:55:32.990721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.942 qpair failed and we were unable to recover it. 00:34:25.942 [2024-05-15 16:55:33.000545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.942 [2024-05-15 16:55:33.000707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.942 [2024-05-15 16:55:33.000734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.942 [2024-05-15 16:55:33.000750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.000763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.000792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.010548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.010659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.010685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.010701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.010713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.010743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 
00:34:25.943 [2024-05-15 16:55:33.020592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.020707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.020733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.020749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.020762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.020794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.030629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.030747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.030777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.030793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.030806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.030848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.040642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.040752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.040777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.040791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.040804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.040834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 
00:34:25.943 [2024-05-15 16:55:33.050775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.050899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.050924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.050939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.050952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.050982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.060693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.060800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.060825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.060839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.060852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.060882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.070780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.070953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.070980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.070995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.071008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.071044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 
00:34:25.943 [2024-05-15 16:55:33.080758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.080883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.080910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.080926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.080938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.080969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.090796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.090904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.090931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.090947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.090960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.090990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.100854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.100968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.100993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.101008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.101021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.101064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 
00:34:25.943 [2024-05-15 16:55:33.110945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.111082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.111107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.111122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.111135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.111164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.120903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.121025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.121057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.121073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.121086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.121127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 00:34:25.943 [2024-05-15 16:55:33.130944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.131078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.131103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.943 [2024-05-15 16:55:33.131118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.943 [2024-05-15 16:55:33.131130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.943 [2024-05-15 16:55:33.131160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.943 qpair failed and we were unable to recover it. 
00:34:25.943 [2024-05-15 16:55:33.140936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.943 [2024-05-15 16:55:33.141057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.943 [2024-05-15 16:55:33.141085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.944 [2024-05-15 16:55:33.141101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.944 [2024-05-15 16:55:33.141117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.944 [2024-05-15 16:55:33.141148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.944 qpair failed and we were unable to recover it. 00:34:25.944 [2024-05-15 16:55:33.150970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.944 [2024-05-15 16:55:33.151089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.944 [2024-05-15 16:55:33.151114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.944 [2024-05-15 16:55:33.151129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.944 [2024-05-15 16:55:33.151142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.944 [2024-05-15 16:55:33.151171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.944 qpair failed and we were unable to recover it. 00:34:25.944 [2024-05-15 16:55:33.160979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.944 [2024-05-15 16:55:33.161097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.944 [2024-05-15 16:55:33.161122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.944 [2024-05-15 16:55:33.161137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.944 [2024-05-15 16:55:33.161156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:25.944 [2024-05-15 16:55:33.161187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.944 qpair failed and we were unable to recover it. 
00:34:26.202 [2024-05-15 16:55:33.171027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.202 [2024-05-15 16:55:33.171150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.202 [2024-05-15 16:55:33.171175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.202 [2024-05-15 16:55:33.171190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.202 [2024-05-15 16:55:33.171203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.202 [2024-05-15 16:55:33.171241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.202 qpair failed and we were unable to recover it. 00:34:26.202 [2024-05-15 16:55:33.181039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.202 [2024-05-15 16:55:33.181154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.202 [2024-05-15 16:55:33.181179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.202 [2024-05-15 16:55:33.181193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.202 [2024-05-15 16:55:33.181206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.202 [2024-05-15 16:55:33.181243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.202 qpair failed and we were unable to recover it. 00:34:26.202 [2024-05-15 16:55:33.191092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.202 [2024-05-15 16:55:33.191220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.202 [2024-05-15 16:55:33.191247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.202 [2024-05-15 16:55:33.191264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.202 [2024-05-15 16:55:33.191278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.202 [2024-05-15 16:55:33.191308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.202 qpair failed and we were unable to recover it. 
00:34:26.202 [2024-05-15 16:55:33.201110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.202 [2024-05-15 16:55:33.201233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.202 [2024-05-15 16:55:33.201259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.202 [2024-05-15 16:55:33.201273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.202 [2024-05-15 16:55:33.201286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.201317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.211135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.211266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.211291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.211306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.211319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.211349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.221140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.221261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.221286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.221301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.221315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.221345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 
00:34:26.203 [2024-05-15 16:55:33.231234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.231388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.231415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.231430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.231443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.231484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.241248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.241370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.241397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.241412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.241425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.241456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.251251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.251364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.251392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.251407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.251425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.251467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 
00:34:26.203 [2024-05-15 16:55:33.261302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.261418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.261444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.261459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.261472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.261501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.271319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.271446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.271473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.271488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.271501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.271530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.281354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.281481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.281508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.281523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.281536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.281578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 
00:34:26.203 [2024-05-15 16:55:33.291355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.291469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.291497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.291512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.291524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.291554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.301414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.301535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.301562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.301578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.301591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.301623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.311476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.311595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.311622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.311637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.311649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.311691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 
00:34:26.203 [2024-05-15 16:55:33.321465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.321584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.321611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.321626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.321639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.321669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.331482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.331601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.331628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.203 [2024-05-15 16:55:33.331643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.203 [2024-05-15 16:55:33.331656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.203 [2024-05-15 16:55:33.331686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.203 qpair failed and we were unable to recover it. 00:34:26.203 [2024-05-15 16:55:33.341540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.203 [2024-05-15 16:55:33.341660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.203 [2024-05-15 16:55:33.341688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.341708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.341722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.341752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-05-15 16:55:33.351585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.351751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.351778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.351794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.351806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.351836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-05-15 16:55:33.361560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.361677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.361704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.361719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.361732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.361774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-05-15 16:55:33.371583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.371692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.371720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.371736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.371748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.371777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-05-15 16:55:33.381602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.381718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.381745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.381760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.381773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.381803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-05-15 16:55:33.391640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.391760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.391786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.391801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.391814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.391844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-05-15 16:55:33.401681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.401803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.401830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.401845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.401858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.401887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-05-15 16:55:33.411703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.411846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.411872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.411888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.411901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.411930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-05-15 16:55:33.421763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.204 [2024-05-15 16:55:33.421897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.204 [2024-05-15 16:55:33.421925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.204 [2024-05-15 16:55:33.421941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.204 [2024-05-15 16:55:33.421954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.204 [2024-05-15 16:55:33.421985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.463 [2024-05-15 16:55:33.431767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.431888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.431921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.463 [2024-05-15 16:55:33.431937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.463 [2024-05-15 16:55:33.431950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.463 [2024-05-15 16:55:33.431980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.463 qpair failed and we were unable to recover it. 
00:34:26.463 [2024-05-15 16:55:33.441780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.441895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.441922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.463 [2024-05-15 16:55:33.441936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.463 [2024-05-15 16:55:33.441949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.463 [2024-05-15 16:55:33.441979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.463 qpair failed and we were unable to recover it. 00:34:26.463 [2024-05-15 16:55:33.451806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.451918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.451945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.463 [2024-05-15 16:55:33.451960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.463 [2024-05-15 16:55:33.451973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.463 [2024-05-15 16:55:33.452003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.463 qpair failed and we were unable to recover it. 00:34:26.463 [2024-05-15 16:55:33.461824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.461934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.461961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.463 [2024-05-15 16:55:33.461976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.463 [2024-05-15 16:55:33.461989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.463 [2024-05-15 16:55:33.462018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.463 qpair failed and we were unable to recover it. 
00:34:26.463 [2024-05-15 16:55:33.471868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.472028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.472055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.463 [2024-05-15 16:55:33.472070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.463 [2024-05-15 16:55:33.472083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.463 [2024-05-15 16:55:33.472118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.463 qpair failed and we were unable to recover it. 00:34:26.463 [2024-05-15 16:55:33.481905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.482020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.482046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.463 [2024-05-15 16:55:33.482061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.463 [2024-05-15 16:55:33.482074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.463 [2024-05-15 16:55:33.482104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.463 qpair failed and we were unable to recover it. 00:34:26.463 [2024-05-15 16:55:33.491909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.463 [2024-05-15 16:55:33.492021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.463 [2024-05-15 16:55:33.492047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.492062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.492075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.492104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 
00:34:26.464 [2024-05-15 16:55:33.501973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.502092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.502121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.502140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.502153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.502184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.511980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.512098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.512125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.512140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.512153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.512183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.521992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.522107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.522138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.522155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.522167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.522197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 
00:34:26.464 [2024-05-15 16:55:33.532047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.532179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.532206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.532229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.532243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.532274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.542056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.542170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.542196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.542211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.542232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.542263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.552126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.552282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.552309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.552324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.552336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.552378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 
00:34:26.464 [2024-05-15 16:55:33.562127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.562283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.562311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.562327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.562343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.562384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.572133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.572252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.572279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.572295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.572308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.572338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.582190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.582336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.582362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.582377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.582390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.582420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 
00:34:26.464 [2024-05-15 16:55:33.592273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.592398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.592424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.592439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.592452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.592482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.602214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.602334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.602358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.602373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.602386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.602416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.612262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.612384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.612411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.612426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.612439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.612468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 
00:34:26.464 [2024-05-15 16:55:33.622294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.622456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.464 [2024-05-15 16:55:33.622483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.464 [2024-05-15 16:55:33.622498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.464 [2024-05-15 16:55:33.622511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.464 [2024-05-15 16:55:33.622541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.464 qpair failed and we were unable to recover it. 00:34:26.464 [2024-05-15 16:55:33.632349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.464 [2024-05-15 16:55:33.632470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.465 [2024-05-15 16:55:33.632496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.465 [2024-05-15 16:55:33.632512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.465 [2024-05-15 16:55:33.632525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.465 [2024-05-15 16:55:33.632566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.465 qpair failed and we were unable to recover it. 00:34:26.465 [2024-05-15 16:55:33.642370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.465 [2024-05-15 16:55:33.642492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.465 [2024-05-15 16:55:33.642518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.465 [2024-05-15 16:55:33.642533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.465 [2024-05-15 16:55:33.642546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.465 [2024-05-15 16:55:33.642576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.465 qpair failed and we were unable to recover it. 
00:34:26.465 [2024-05-15 16:55:33.652389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.465 [2024-05-15 16:55:33.652505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.465 [2024-05-15 16:55:33.652532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.465 [2024-05-15 16:55:33.652547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.465 [2024-05-15 16:55:33.652565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.465 [2024-05-15 16:55:33.652595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.465 qpair failed and we were unable to recover it. 00:34:26.465 [2024-05-15 16:55:33.662408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.465 [2024-05-15 16:55:33.662525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.465 [2024-05-15 16:55:33.662552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.465 [2024-05-15 16:55:33.662567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.465 [2024-05-15 16:55:33.662580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.465 [2024-05-15 16:55:33.662610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.465 qpair failed and we were unable to recover it. 00:34:26.465 [2024-05-15 16:55:33.672472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.465 [2024-05-15 16:55:33.672595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.465 [2024-05-15 16:55:33.672621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.465 [2024-05-15 16:55:33.672636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.465 [2024-05-15 16:55:33.672649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.465 [2024-05-15 16:55:33.672679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.465 qpair failed and we were unable to recover it. 
00:34:26.465 [2024-05-15 16:55:33.682467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.465 [2024-05-15 16:55:33.682620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.465 [2024-05-15 16:55:33.682647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.465 [2024-05-15 16:55:33.682662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.465 [2024-05-15 16:55:33.682675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.465 [2024-05-15 16:55:33.682717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.465 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.692481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.692595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.692621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.692637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.692649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.692679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.702501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.702628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.702655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.702670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.702683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.702713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 
00:34:26.723 [2024-05-15 16:55:33.712559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.712679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.712705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.712720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.712733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.712763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.722602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.722768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.722795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.722810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.722822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.722854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.732591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.732702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.732729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.732744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.732757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.732787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 
00:34:26.723 [2024-05-15 16:55:33.742671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.742811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.742840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.742862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.742879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.742910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.752665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.752795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.752821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.752836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.752849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.752879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.762666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.762779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.762805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.762820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.762833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.762863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 
00:34:26.723 [2024-05-15 16:55:33.772708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.772864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.772892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.772907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.772920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.772949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.782760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.782891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.782918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.782934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.782947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.782977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.792759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.792875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.792901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.792916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.792930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.792959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 
00:34:26.723 [2024-05-15 16:55:33.802808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.802927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.802954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.802969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.802982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.803012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.812875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.813007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.813034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.813050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.813063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.813107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.822874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.822985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.823012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.823027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.823040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.823082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 
00:34:26.723 [2024-05-15 16:55:33.832928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.833099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.833131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.833149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.833161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.833203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.842915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.843031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.843055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.843069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.843082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.843111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.852996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.853109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.853134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.853149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.853162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.853191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 
00:34:26.723 [2024-05-15 16:55:33.863059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.863183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.863211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.863236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.863250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.863281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.873012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.873131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.873157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.723 [2024-05-15 16:55:33.873172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.723 [2024-05-15 16:55:33.873185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.723 [2024-05-15 16:55:33.873229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.723 qpair failed and we were unable to recover it. 00:34:26.723 [2024-05-15 16:55:33.883083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.723 [2024-05-15 16:55:33.883207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.723 [2024-05-15 16:55:33.883244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.883260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.883273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.883304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 
00:34:26.724 [2024-05-15 16:55:33.893081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.724 [2024-05-15 16:55:33.893200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.724 [2024-05-15 16:55:33.893235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.893254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.893267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.893309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 00:34:26.724 [2024-05-15 16:55:33.903112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.724 [2024-05-15 16:55:33.903231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.724 [2024-05-15 16:55:33.903257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.903271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.903283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.903326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 00:34:26.724 [2024-05-15 16:55:33.913243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.724 [2024-05-15 16:55:33.913366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.724 [2024-05-15 16:55:33.913393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.913408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.913421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.913451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 
00:34:26.724 [2024-05-15 16:55:33.923172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.724 [2024-05-15 16:55:33.923301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.724 [2024-05-15 16:55:33.923333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.923349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.923362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.923392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 00:34:26.724 [2024-05-15 16:55:33.933194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.724 [2024-05-15 16:55:33.933346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.724 [2024-05-15 16:55:33.933373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.933388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.933401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.933433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 00:34:26.724 [2024-05-15 16:55:33.943229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.724 [2024-05-15 16:55:33.943348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.724 [2024-05-15 16:55:33.943375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.724 [2024-05-15 16:55:33.943390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.724 [2024-05-15 16:55:33.943403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.724 [2024-05-15 16:55:33.943433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.724 qpair failed and we were unable to recover it. 
00:34:26.981 [2024-05-15 16:55:33.953284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.981 [2024-05-15 16:55:33.953410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.981 [2024-05-15 16:55:33.953437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.981 [2024-05-15 16:55:33.953452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.981 [2024-05-15 16:55:33.953468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.982 [2024-05-15 16:55:33.953510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-05-15 16:55:33.963306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-05-15 16:55:33.963427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-05-15 16:55:33.963453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-05-15 16:55:33.963468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-05-15 16:55:33.963482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.982 [2024-05-15 16:55:33.963517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-05-15 16:55:33.973296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-05-15 16:55:33.973405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-05-15 16:55:33.973431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-05-15 16:55:33.973446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-05-15 16:55:33.973459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.982 [2024-05-15 16:55:33.973489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.982 qpair failed and we were unable to recover it. 
00:34:26.982 [2024-05-15 16:55:33.983347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-05-15 16:55:33.983462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-05-15 16:55:33.983489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-05-15 16:55:33.983504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-05-15 16:55:33.983517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.982 [2024-05-15 16:55:33.983547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-05-15 16:55:33.993400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-05-15 16:55:33.993532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-05-15 16:55:33.993558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-05-15 16:55:33.993573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-05-15 16:55:33.993585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.982 [2024-05-15 16:55:33.993615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-05-15 16:55:34.003442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-05-15 16:55:34.003577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-05-15 16:55:34.003606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-05-15 16:55:34.003622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-05-15 16:55:34.003635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90 00:34:26.982 [2024-05-15 16:55:34.003665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.982 qpair failed and we were unable to recover it. 
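Every attempt in the long run above and immediately below fails with the same six-line signature: the target's ctrlr.c rejects the I/O qpair CONNECT because controller ID 0x1 is no longer known, the initiator's fabric CONNECT poll returns rc -5, and the qpair is abandoned. When triaging a capture like this it is worth confirming that the storm really is one repeated failure rather than several distinct ones. A minimal sketch, assuming the console output has been saved to a local file named build.log (the file name is hypothetical, not part of this run):

  # Collapse the retry storm into distinct status signatures with counts.
  # build.log is an assumed local capture of this console output.
  grep -Eo 'Connect command completed with error: sct [0-9]+, sc [0-9]+' build.log \
    | sort | uniq -c | sort -rn
  # For this run a single signature is expected, 'sct 1, sc 130',
  # repeated once per reconnect attempt.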
00:34:26.982 [2024-05-15 16:55:34.013435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:26.982 [2024-05-15 16:55:34.013545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:26.982 [2024-05-15 16:55:34.013578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:26.982 [2024-05-15 16:55:34.013594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:26.982 [2024-05-15 16:55:34.013606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90
00:34:26.982 [2024-05-15 16:55:34.013636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:26.982 qpair failed and we were unable to recover it.
00:34:26.982 [2024-05-15 16:55:34.023485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:26.982 [2024-05-15 16:55:34.023602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:26.982 [2024-05-15 16:55:34.023629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:26.982 [2024-05-15 16:55:34.023645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:26.982 [2024-05-15 16:55:34.023657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90
00:34:26.982 [2024-05-15 16:55:34.023687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:26.982 qpair failed and we were unable to recover it.
00:34:26.982 [2024-05-15 16:55:34.033534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:26.982 [2024-05-15 16:55:34.033661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:26.982 [2024-05-15 16:55:34.033687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:26.982 [2024-05-15 16:55:34.033702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:26.982 [2024-05-15 16:55:34.033715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6f44000b90
00:34:26.982 [2024-05-15 16:55:34.033745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:26.982 qpair failed and we were unable to recover it.
00:34:26.982 [2024-05-15 16:55:34.033870] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:34:26.982 A controller has encountered a failure and is being reset.
00:34:26.982 Controller properly reset.
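A note on the status pair before the reset: sct 1 selects the Command Specific status type, and for a Fabrics CONNECT command a status code of 130 decimal is 0x82, which the NVMe-oF specification defines as Connect Invalid Parameters. That decoding is consistent with the target-side complaint, since each CONNECT named a controller ID (0x1) the target no longer recognized. A quick sketch of the arithmetic (SPDK logs the codes in decimal, while the spec tables are keyed by hex):

  # Convert the logged decimal status code to the hex value used in the spec.
  printf 'sc %d = 0x%02x\n' 130 130   # prints: sc 130 = 0x82

Once the admin controller has been reset, as logged just above, a fresh CONNECT can be granted a valid controller ID again, which is why the re-initialization below succeeds.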
00:34:26.982 Initializing NVMe Controllers
00:34:26.982 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:26.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:26.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:26.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:26.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:26.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:26.982 Initialization complete. Launching workers.
00:34:26.982 Starting thread on core 1
00:34:26.982 Starting thread on core 2
00:34:26.982 Starting thread on core 3
00:34:26.982 Starting thread on core 0
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:34:26.982
00:34:26.982 real 0m10.737s
00:34:26.982 user 0m18.128s
00:34:26.982 sys 0m5.271s
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.982 ************************************
00:34:26.982 END TEST nvmf_target_disconnect_tc2
00:34:26.982 ************************************
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:26.982 rmmod nvme_tcp
00:34:26.982 rmmod nvme_fabrics
00:34:26.982 rmmod nvme_keyring
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1941329 ']'
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1941329
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1941329 ']'
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1941329
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:26.982 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1941329
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']'
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1941329'
00:34:27.241 killing process with pid 1941329
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1941329
00:34:27.241 [2024-05-15 16:55:34.218838] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1941329
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:27.241 16:55:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:29.809 16:55:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:29.809
00:34:29.809 real 0m16.031s
00:34:29.809 user 0m44.146s
00:34:29.809 sys 0m7.648s
00:34:29.809 16:55:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:29.810 16:55:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:29.810 ************************************
00:34:29.810 END TEST nvmf_target_disconnect
00:34:29.810 ************************************
00:34:29.810 16:55:36 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host
00:34:29.810 16:55:36 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:29.810 16:55:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:29.810 16:55:36 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:34:29.810
00:34:29.810 real 27m0.120s
00:34:29.810 user 72m44.944s
00:34:29.810 sys 6m33.561s
00:34:29.810 16:55:36 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:29.810 16:55:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:29.810 ************************************
00:34:29.810 END TEST nvmf_tcp
00:34:29.810 ************************************
00:34:29.810 16:55:36 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]]
00:34:29.810 16:55:36 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:34:29.810 16:55:36 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:29.810 16:55:36 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:29.810 16:55:36 -- common/autotest_common.sh@10 -- # set +x
00:34:29.810 ************************************
00:34:29.810 START TEST spdkcli_nvmf_tcp
00:34:29.810 ************************************
00:34:29.810 16:55:36 spdkcli_nvmf_tcp --
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.810 * Looking for test storage... 00:34:29.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1942524 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1942524 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1942524 ']' 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.810 [2024-05-15 16:55:36.694389] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:34:29.810 [2024-05-15 16:55:36.694471] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942524 ] 00:34:29.810 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.810 [2024-05-15 16:55:36.764026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:29.810 [2024-05-15 16:55:36.850962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.810 [2024-05-15 16:55:36.850967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.810 16:55:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:29.810 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:29.810 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:29.810 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:29.810 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:29.810 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:29.810 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:29.810 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:29.811 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:29.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:29.811 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:29.811 ' 00:34:32.336 [2024-05-15 16:55:39.524548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.706 [2024-05-15 16:55:40.748390] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:33.706 [2024-05-15 16:55:40.749054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:36.229 [2024-05-15 16:55:43.036013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:38.125 [2024-05-15 16:55:44.998370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:39.496 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:39.496 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:39.496 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:39.496 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:39.496 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:39.496 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:39.496 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:39.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.496 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:39.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:39.496 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:39.496 16:55:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:40.061 16:55:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:40.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:40.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:40.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:40.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:40.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:40.061 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:40.061 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:40.061 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:40.061 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:40.061 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:40.061 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:40.061 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:40.061 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:40.061 ' 00:34:45.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:45.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:45.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:45.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:45.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:45.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:45.320 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:45.320 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:45.320 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:45.320 
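Each bracketed triple fed to spdkcli_job.py here is [spdkcli command, substring checked in the subsequent listing, whether it is expected to still be present]; the remaining deletions execute just below. As a hand-run sketch of the same teardown batch, assuming the workspace layout shown in this log (scripts/spdkcli.py executes a single command passed on its command line, the same way the 'll /nvmf' verification call above does):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1"   # drop one namespace by NSID
$SPDK/scripts/spdkcli.py "/nvmf/subsystem delete_all"                                             # remove every remaining subsystem
$SPDK/scripts/spdkcli.py "/bdevs/malloc delete Malloc1"                                           # free a backing malloc bdev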
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:45.320 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:45.320 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:45.320 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:45.320 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1942524 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1942524 ']' 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1942524 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1942524 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1942524' 00:34:45.320 killing process with pid 1942524 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1942524 00:34:45.320 [2024-05-15 16:55:52.411312] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:45.320 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1942524 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1942524 ']' 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1942524 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1942524 ']' 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1942524 00:34:45.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1942524) - No such process 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1942524 is not found' 00:34:45.579 Process with pid 1942524 is not found 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:45.579 00:34:45.579 real 0m16.057s 00:34:45.579 user 0m33.964s 00:34:45.579 sys 0m0.794s 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:45.579 16:55:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:34:45.579 ************************************ 00:34:45.579 END TEST spdkcli_nvmf_tcp 00:34:45.579 ************************************ 00:34:45.579 16:55:52 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:45.579 16:55:52 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:45.579 16:55:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:45.579 16:55:52 -- common/autotest_common.sh@10 -- # set +x 00:34:45.579 ************************************ 00:34:45.579 START TEST nvmf_identify_passthru 00:34:45.579 ************************************ 00:34:45.579 16:55:52 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:45.579 * Looking for test storage... 00:34:45.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.579 16:55:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.579 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.580 16:55:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.580 16:55:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.580 16:55:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.580 16:55:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.580 16:55:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.580 16:55:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.580 16:55:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:45.580 16:55:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.580 16:55:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.580 16:55:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:45.580 16:55:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:45.580 16:55:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:45.580 16:55:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.188 16:55:55 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:48.188 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:48.188 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.188 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:48.188 Found net devices under 0000:09:00.0: cvl_0_0 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:48.189 Found net devices under 0000:09:00.1: cvl_0_1 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
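The 'Found net devices under ...' lines come from a plain sysfs walk rather than any driver query; a condensed sketch of that lookup, using the PCI addresses this rig reported:

for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev entries the kernel registered on this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done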
00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:48.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:34:48.189 00:34:48.189 --- 10.0.0.2 ping statistics --- 00:34:48.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.189 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
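Condensed, the nvmf_tcp_init sequence traced above turns the two detected ports into a two-endpoint topology, with the target side isolated in a network namespace; the ping exchanges around this point then confirm reachability in both directions. A sketch using the interface names this rig reported:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port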
00:34:48.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:34:48.189 00:34:48.189 --- 10.0.0.1 ping statistics --- 00:34:48.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.189 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:48.189 16:55:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:48.189 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.189 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:48.189 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:48.448 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:48.448 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:0b:00.0 00:34:48.448 16:55:55 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:0b:00.0 00:34:48.448 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:48.448 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:48.448 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:48.448 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:48.448 16:55:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:48.448 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.629 
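get_first_nvme_bdf above resolves the PCI address of a local NVMe controller by reading the generated bdev config rather than scanning the bus directly; a sketch of that resolution under the same tree layout:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=($("$SPDK/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # one traddr per attachable controller
echo "${bdfs[0]}"                                                         # resolves to 0000:0b:00.0 on this rig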
16:55:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:52.629 16:55:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:52.629 16:55:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:52.629 16:55:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:52.629 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1947337 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1947337 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1947337 ']' 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.807 [2024-05-15 16:56:03.775440] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:34:56.807 [2024-05-15 16:56:03.775545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.807 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.807 [2024-05-15 16:56:03.859852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.807 [2024-05-15 16:56:03.946305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.807 [2024-05-15 16:56:03.946367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
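Because the target was launched with --wait-for-rpc, it sits idle until JSON-RPC drives initialization; the rpc_cmd calls that follow go through scripts/rpc.py. A sketch of the same bring-up issued directly, reusing the flags from this trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
$SPDK/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # forward admin identify commands to the backing controller
$SPDK/scripts/rpc.py framework_start_init                        # complete the deferred subsystem init
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # create the TCP transport, as rpc_cmd does below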
00:34:56.807 [2024-05-15 16:56:03.946393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.807 [2024-05-15 16:56:03.946406] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.807 [2024-05-15 16:56:03.946418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.807 [2024-05-15 16:56:03.946525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.807 [2024-05-15 16:56:03.946601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.807 [2024-05-15 16:56:03.946693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.807 [2024-05-15 16:56:03.946695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.807 INFO: Log level set to 20 00:34:56.807 INFO: Requests: 00:34:56.807 { 00:34:56.807 "jsonrpc": "2.0", 00:34:56.807 "method": "nvmf_set_config", 00:34:56.807 "id": 1, 00:34:56.807 "params": { 00:34:56.807 "admin_cmd_passthru": { 00:34:56.807 "identify_ctrlr": true 00:34:56.807 } 00:34:56.807 } 00:34:56.807 } 00:34:56.807 00:34:56.807 INFO: response: 00:34:56.807 { 00:34:56.807 "jsonrpc": "2.0", 00:34:56.807 "id": 1, 00:34:56.807 "result": true 00:34:56.807 } 00:34:56.807 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.807 16:56:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.807 16:56:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.807 INFO: Setting log level to 20 00:34:56.807 INFO: Setting log level to 20 00:34:56.807 INFO: Log level set to 20 00:34:56.807 INFO: Log level set to 20 00:34:56.807 INFO: Requests: 00:34:56.807 { 00:34:56.807 "jsonrpc": "2.0", 00:34:56.808 "method": "framework_start_init", 00:34:56.808 "id": 1 00:34:56.808 } 00:34:56.808 00:34:56.808 INFO: Requests: 00:34:56.808 { 00:34:56.808 "jsonrpc": "2.0", 00:34:56.808 "method": "framework_start_init", 00:34:56.808 "id": 1 00:34:56.808 } 00:34:56.808 00:34:57.065 [2024-05-15 16:56:04.091406] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:57.065 INFO: response: 00:34:57.065 { 00:34:57.065 "jsonrpc": "2.0", 00:34:57.065 "id": 1, 00:34:57.065 "result": true 00:34:57.065 } 00:34:57.065 00:34:57.065 INFO: response: 00:34:57.065 { 00:34:57.065 "jsonrpc": "2.0", 00:34:57.065 "id": 1, 00:34:57.065 "result": true 00:34:57.065 } 00:34:57.065 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.065 16:56:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.065 16:56:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:57.065 INFO: Setting log level to 40 00:34:57.065 INFO: Setting log level to 40 00:34:57.065 INFO: Setting log level to 40 00:34:57.065 [2024-05-15 16:56:04.101368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.065 16:56:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.065 16:56:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.065 16:56:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.341 Nvme0n1 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.341 16:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.341 16:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.341 16:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.341 [2024-05-15 16:56:06.994126] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:00.341 [2024-05-15 16:56:06.994469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.341 16:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.341 16:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.341 [ 00:35:00.341 { 00:35:00.341 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:00.341 "subtype": "Discovery", 00:35:00.341 "listen_addresses": [], 00:35:00.341 "allow_any_host": true, 00:35:00.341 "hosts": [] 00:35:00.341 }, 00:35:00.341 { 00:35:00.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:00.341 "subtype": "NVMe", 00:35:00.341 "listen_addresses": [ 00:35:00.341 { 00:35:00.341 "trtype": "TCP", 
00:35:00.341 "adrfam": "IPv4", 00:35:00.341 "traddr": "10.0.0.2", 00:35:00.341 "trsvcid": "4420" 00:35:00.341 } 00:35:00.341 ], 00:35:00.341 "allow_any_host": true, 00:35:00.341 "hosts": [], 00:35:00.341 "serial_number": "SPDK00000000000001", 00:35:00.341 "model_number": "SPDK bdev Controller", 00:35:00.341 "max_namespaces": 1, 00:35:00.341 "min_cntlid": 1, 00:35:00.341 "max_cntlid": 65519, 00:35:00.341 "namespaces": [ 00:35:00.341 { 00:35:00.341 "nsid": 1, 00:35:00.341 "bdev_name": "Nvme0n1", 00:35:00.341 "name": "Nvme0n1", 00:35:00.341 "nguid": "D1806623177F4D90A05EE310E77DDC3C", 00:35:00.341 "uuid": "d1806623-177f-4d90-a05e-e310e77ddc3c" 00:35:00.341 } 00:35:00.341 ] 00:35:00.341 } 00:35:00.341 ] 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:00.341 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:00.341 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:00.341 16:56:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:00.341 rmmod nvme_tcp 00:35:00.341 rmmod nvme_fabrics 00:35:00.341 rmmod 
nvme_keyring 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1947337 ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1947337 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1947337 ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1947337 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1947337 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:00.341 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1947337' 00:35:00.342 killing process with pid 1947337 00:35:00.342 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1947337 00:35:00.342 [2024-05-15 16:56:07.511388] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:00.342 16:56:07 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1947337 00:35:02.242 16:56:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:02.242 16:56:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:02.242 16:56:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:02.242 16:56:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:02.242 16:56:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:02.242 16:56:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.242 16:56:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.242 16:56:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.142 16:56:11 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.142 00:35:04.142 real 0m18.311s 00:35:04.142 user 0m26.574s 00:35:04.142 sys 0m2.619s 00:35:04.142 16:56:11 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:04.142 16:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.142 ************************************ 00:35:04.142 END TEST nvmf_identify_passthru 00:35:04.142 ************************************ 00:35:04.142 16:56:11 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:04.142 16:56:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:04.142 16:56:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:04.142 16:56:11 -- common/autotest_common.sh@10 -- # set +x 00:35:04.142 ************************************ 00:35:04.142 START TEST nvmf_dif 
00:35:04.142 ************************************ 00:35:04.142 16:56:11 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:04.142 * Looking for test storage... 00:35:04.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.142 16:56:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.142 16:56:11 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.142 16:56:11 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.142 16:56:11 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.142 16:56:11 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.142 16:56:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.142 16:56:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.142 16:56:11 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.142 16:56:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:04.143 16:56:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.143 16:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:04.143 16:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:04.143 16:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:04.143 16:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:04.143 16:56:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.143 16:56:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.143 16:56:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.143 16:56:11 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.143 16:56:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
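The e810/x722/mlx arrays being rebuilt here (the same NIC scan already traced for the previous test) bucket candidate ports by PCI vendor:device ID before the sysfs walk repeats; schematically, with a plain associative array standing in for the script's pci_bus_cache lookups:

declare -A nic_family=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810 device IDs
    [0x8086:0x37d2]=x722                        # Intel X722
    [0x15b3:0x1015]=mlx  [0x15b3:0x1017]=mlx    # Mellanox (two of the IDs listed above)
)
echo "0x8086:0x159b -> ${nic_family[0x8086:0x159b]}"   # e810, matching the ports found on this rig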
00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:06.670 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:06.670 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.670 16:56:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.671 16:56:13 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:06.671 Found net devices under 0000:09:00.0: cvl_0_0 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:06.671 Found net devices under 0000:09:00.1: cvl_0_1 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:06.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:35:06.671 00:35:06.671 --- 10.0.0.2 ping statistics --- 00:35:06.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.671 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:35:06.671 00:35:06.671 --- 10.0.0.1 ping statistics --- 00:35:06.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.671 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:06.671 16:56:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:08.046 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:08.046 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:08.046 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:08.046 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:08.046 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:08.046 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:08.046 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:08.046 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:08.046 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:08.046 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:08.046 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:08.046 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:08.046 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:08.046 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:08.046 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:08.046 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:08.046 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:08.046 16:56:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.046 16:56:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:08.046 16:56:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:08.046 16:56:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.046 16:56:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:08.046 16:56:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:08.046 16:56:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:08.046 16:56:15 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:35:08.047 16:56:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.047 16:56:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1951011 00:35:08.047 16:56:15 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:08.047 16:56:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1951011 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1951011 ']' 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:08.047 16:56:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.047 [2024-05-15 16:56:15.248633] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:35:08.047 [2024-05-15 16:56:15.248712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.344 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.344 [2024-05-15 16:56:15.327343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.344 [2024-05-15 16:56:15.406370] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.344 [2024-05-15 16:56:15.406423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.344 [2024-05-15 16:56:15.406447] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.344 [2024-05-15 16:56:15.406458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.344 [2024-05-15 16:56:15.406469] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
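nvmfappstart, traced above, boils down to two steps: launch nvmf_tgt inside the freshly created test namespace, then block until its RPC socket answers. A minimal sketch of the same pattern under this job's paths; the polling loop is an approximation of waitforlisten, not the helper's actual code in autotest_common.sh:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk
    # Start the target in the test namespace with shm id 0 and all
    # tracepoint groups enabled (-e 0xFFFF), as the log above shows.
    ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the RPC socket until the app answers (approximation of waitforlisten).
    until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done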
00:35:08.344 [2024-05-15 16:56:15.406494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.344 16:56:15 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:08.344 16:56:15 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:08.344 16:56:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:08.344 16:56:15 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.345 16:56:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.345 16:56:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:08.345 16:56:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.345 [2024-05-15 16:56:15.537945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.345 16:56:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:08.345 16:56:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.609 ************************************ 00:35:08.609 START TEST fio_dif_1_default 00:35:08.609 ************************************ 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.609 bdev_null0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.609 [2024-05-15 16:56:15.598050] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:08.609 [2024-05-15 16:56:15.598329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.609 { 00:35:08.609 "params": { 00:35:08.609 "name": "Nvme$subsystem", 00:35:08.609 "trtype": "$TEST_TRANSPORT", 00:35:08.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.609 "adrfam": "ipv4", 00:35:08.609 "trsvcid": "$NVMF_PORT", 00:35:08.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.609 "hdgst": ${hdgst:-false}, 00:35:08.609 "ddgst": ${ddgst:-false} 00:35:08.609 }, 00:35:08.609 "method": "bdev_nvme_attach_controller" 00:35:08.609 } 00:35:08.609 EOF 00:35:08.609 )") 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:08.609 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:08.610 16:56:15 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:08.610 "params": { 00:35:08.610 "name": "Nvme0", 00:35:08.610 "trtype": "tcp", 00:35:08.610 "traddr": "10.0.0.2", 00:35:08.610 "adrfam": "ipv4", 00:35:08.610 "trsvcid": "4420", 00:35:08.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.610 "hdgst": false, 00:35:08.610 "ddgst": false 00:35:08.610 }, 00:35:08.610 "method": "bdev_nvme_attach_controller" 00:35:08.610 }' 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:08.610 16:56:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.867 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:08.867 fio-3.35 00:35:08.867 Starting 1 thread 00:35:08.867 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.055 00:35:21.055 filename0: (groupid=0, jobs=1): err= 0: pid=1951194: Wed May 15 16:56:26 2024 00:35:21.055 read: IOPS=141, BW=568KiB/s (581kB/s)(5680KiB/10005msec) 00:35:21.055 slat (nsec): min=6995, max=60934, avg=9723.32, stdev=4624.96 00:35:21.055 clat (usec): min=688, max=47422, avg=28151.29, stdev=18844.77 00:35:21.055 lat (usec): min=695, max=47460, avg=28161.01, stdev=18845.35 00:35:21.055 clat percentiles (usec): 00:35:21.055 | 1.00th=[ 725], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 775], 00:35:21.055 | 30.00th=[ 824], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:21.055 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:21.055 | 99.00th=[41681], 99.50th=[41681], 
99.90th=[47449], 99.95th=[47449], 00:35:21.055 | 99.99th=[47449] 00:35:21.055 bw ( KiB/s): min= 384, max= 768, per=99.70%, avg=566.40, stdev=179.85, samples=20 00:35:21.055 iops : min= 96, max= 192, avg=141.60, stdev=44.96, samples=20 00:35:21.055 lat (usec) : 750=12.04%, 1000=20.07% 00:35:21.055 lat (msec) : 50=67.89% 00:35:21.055 cpu : usr=89.65%, sys=10.07%, ctx=17, majf=0, minf=272 00:35:21.055 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.055 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.055 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:21.055 00:35:21.055 Run status group 0 (all jobs): 00:35:21.055 READ: bw=568KiB/s (581kB/s), 568KiB/s-568KiB/s (581kB/s-581kB/s), io=5680KiB (5816kB), run=10005-10005msec 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 00:35:21.055 real 0m11.094s 00:35:21.055 user 0m10.105s 00:35:21.055 sys 0m1.277s 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 ************************************ 00:35:21.055 END TEST fio_dif_1_default 00:35:21.055 ************************************ 00:35:21.055 16:56:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:21.055 16:56:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:21.055 16:56:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 ************************************ 00:35:21.055 START TEST fio_dif_1_multi_subsystems 00:35:21.055 ************************************ 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:21.055 16:56:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 bdev_null0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 [2024-05-15 16:56:26.751962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 bdev_null1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.055 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:21.056 { 00:35:21.056 "params": { 00:35:21.056 "name": "Nvme$subsystem", 00:35:21.056 "trtype": "$TEST_TRANSPORT", 00:35:21.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.056 "adrfam": "ipv4", 00:35:21.056 "trsvcid": "$NVMF_PORT", 00:35:21.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.056 "hdgst": ${hdgst:-false}, 00:35:21.056 "ddgst": ${ddgst:-false} 00:35:21.056 }, 00:35:21.056 "method": "bdev_nvme_attach_controller" 00:35:21.056 } 00:35:21.056 EOF 00:35:21.056 )") 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:21.056 16:56:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:21.056 { 00:35:21.056 "params": { 00:35:21.056 "name": "Nvme$subsystem", 00:35:21.056 "trtype": "$TEST_TRANSPORT", 00:35:21.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.056 "adrfam": "ipv4", 00:35:21.056 "trsvcid": "$NVMF_PORT", 00:35:21.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.056 "hdgst": ${hdgst:-false}, 00:35:21.056 "ddgst": ${ddgst:-false} 00:35:21.056 }, 00:35:21.056 "method": "bdev_nvme_attach_controller" 00:35:21.056 } 00:35:21.056 EOF 00:35:21.056 )") 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
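gen_nvmf_target_json, whose heredoc template is traced above, expands that template once per subsystem index and joins the pieces into the JSON that fio's spdk_bdev engine reads from /dev/fd/62 (printed next). A reduced sketch of the same assembly with this test's values; the exact quoting and jq plumbing in nvmf/common.sh differ:

    # Sketch: one bdev_nvme_attach_controller params block per index,
    # wrapped in an array and pretty-printed, mirroring the output below.
    gen_entry() {
        cat <<EOF
    {"params":{"name":"Nvme$1","trtype":"tcp","traddr":"10.0.0.2",
    "adrfam":"ipv4","trsvcid":"4420",
    "subnqn":"nqn.2016-06.io.spdk:cnode$1",
    "hostnqn":"nqn.2016-06.io.spdk:host$1",
    "hdgst":false,"ddgst":false},
    "method":"bdev_nvme_attach_controller"}
    EOF
    }
    printf '[%s,%s]' "$(gen_entry 0)" "$(gen_entry 1)" | jq .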
00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:21.056 "params": { 00:35:21.056 "name": "Nvme0", 00:35:21.056 "trtype": "tcp", 00:35:21.056 "traddr": "10.0.0.2", 00:35:21.056 "adrfam": "ipv4", 00:35:21.056 "trsvcid": "4420", 00:35:21.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.056 "hdgst": false, 00:35:21.056 "ddgst": false 00:35:21.056 }, 00:35:21.056 "method": "bdev_nvme_attach_controller" 00:35:21.056 },{ 00:35:21.056 "params": { 00:35:21.056 "name": "Nvme1", 00:35:21.056 "trtype": "tcp", 00:35:21.056 "traddr": "10.0.0.2", 00:35:21.056 "adrfam": "ipv4", 00:35:21.056 "trsvcid": "4420", 00:35:21.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:21.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:21.056 "hdgst": false, 00:35:21.056 "ddgst": false 00:35:21.056 }, 00:35:21.056 "method": "bdev_nvme_attach_controller" 00:35:21.056 }' 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:21.056 16:56:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.056 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:21.056 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:21.056 fio-3.35 00:35:21.056 Starting 2 threads 00:35:21.056 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.019 00:35:31.019 filename0: (groupid=0, jobs=1): err= 0: pid=1952710: Wed May 15 16:56:37 2024 00:35:31.019 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:35:31.019 slat (nsec): min=5063, max=34248, avg=10497.33, stdev=3905.97 00:35:31.019 clat (usec): min=649, max=43593, avg=21022.03, stdev=20211.17 00:35:31.019 lat (usec): min=658, max=43608, avg=21032.53, stdev=20211.53 00:35:31.019 clat percentiles (usec): 00:35:31.019 | 1.00th=[ 709], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 742], 00:35:31.019 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[40633], 60.00th=[41157], 00:35:31.019 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:31.019 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:35:31.019 | 99.99th=[43779] 
00:35:31.019 bw ( KiB/s): min= 704, max= 768, per=57.18%, avg=761.26, stdev=17.13, samples=19 00:35:31.019 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:35:31.020 lat (usec) : 750=22.68%, 1000=25.95% 00:35:31.020 lat (msec) : 2=1.26%, 50=50.11% 00:35:31.020 cpu : usr=94.42%, sys=5.30%, ctx=13, majf=0, minf=128 00:35:31.020 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.020 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.020 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:31.020 filename1: (groupid=0, jobs=1): err= 0: pid=1952711: Wed May 15 16:56:37 2024 00:35:31.020 read: IOPS=142, BW=571KiB/s (585kB/s)(5712KiB/10003msec) 00:35:31.020 slat (nsec): min=6178, max=48616, avg=10002.16, stdev=3026.82 00:35:31.020 clat (usec): min=762, max=43591, avg=27987.27, stdev=18970.22 00:35:31.020 lat (usec): min=780, max=43606, avg=27997.27, stdev=18970.18 00:35:31.020 clat percentiles (usec): 00:35:31.020 | 1.00th=[ 791], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 832], 00:35:31.020 | 30.00th=[ 857], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:31.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:31.020 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:35:31.020 | 99.99th=[43779] 00:35:31.020 bw ( KiB/s): min= 384, max= 768, per=42.46%, avg=565.89, stdev=188.43, samples=19 00:35:31.020 iops : min= 96, max= 192, avg=141.47, stdev=47.11, samples=19 00:35:31.020 lat (usec) : 1000=32.49% 00:35:31.020 lat (msec) : 2=0.28%, 50=67.23% 00:35:31.020 cpu : usr=94.32%, sys=5.39%, ctx=12, majf=0, minf=128 00:35:31.020 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.020 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.020 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:31.020 00:35:31.020 Run status group 0 (all jobs): 00:35:31.020 READ: bw=1331KiB/s (1363kB/s), 571KiB/s-760KiB/s (585kB/s-778kB/s), io=13.0MiB (13.6MB), run=10001-10003msec 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:31.020 16:56:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 00:35:31.020 real 0m11.389s 00:35:31.020 user 0m20.211s 00:35:31.020 sys 0m1.397s 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 ************************************ 00:35:31.020 END TEST fio_dif_1_multi_subsystems 00:35:31.020 ************************************ 00:35:31.020 16:56:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:31.020 16:56:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:31.020 16:56:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 ************************************ 00:35:31.020 START TEST fio_dif_rand_params 00:35:31.020 ************************************ 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 bdev_null0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 [2024-05-15 16:56:38.198477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:31.020 { 00:35:31.020 "params": { 00:35:31.020 "name": "Nvme$subsystem", 00:35:31.020 "trtype": "$TEST_TRANSPORT", 00:35:31.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.020 "adrfam": "ipv4", 00:35:31.020 "trsvcid": "$NVMF_PORT", 00:35:31.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.020 "hdgst": ${hdgst:-false}, 00:35:31.020 "ddgst": ${ddgst:-false} 00:35:31.020 }, 00:35:31.020 "method": "bdev_nvme_attach_controller" 00:35:31.020 } 00:35:31.020 EOF 00:35:31.020 )") 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:31.020 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
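For fio_dif_rand_params, the knobs set at target/dif.sh@103 (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) feed the fio job whose banner appears below. A hedged reconstruction of the equivalent command line; dif.sh actually generates a job file on /dev/fd/61 rather than passing flags, the Nvme0n1 filename and --time_based are assumptions here:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # LD_PRELOAD of the fio bdev plugin and --spdk_json_conf match the
    # invocation traced above; the remaining options mirror dif.sh@103.
    LD_PRELOAD="$SPDK_ROOT/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --name=filename0 --filename=Nvme0n1 --rw=randread \
        --bs=128k --numjobs=3 --iodepth=3 --runtime=5 --time_based=1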
00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:31.021 "params": { 00:35:31.021 "name": "Nvme0", 00:35:31.021 "trtype": "tcp", 00:35:31.021 "traddr": "10.0.0.2", 00:35:31.021 "adrfam": "ipv4", 00:35:31.021 "trsvcid": "4420", 00:35:31.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:31.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:31.021 "hdgst": false, 00:35:31.021 "ddgst": false 00:35:31.021 }, 00:35:31.021 "method": "bdev_nvme_attach_controller" 00:35:31.021 }' 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:31.021 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:31.278 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:31.278 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:31.278 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:31.278 16:56:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.278 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:31.278 ... 
00:35:31.278 fio-3.35 00:35:31.278 Starting 3 threads 00:35:31.278 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.836 00:35:37.836 filename0: (groupid=0, jobs=1): err= 0: pid=1954094: Wed May 15 16:56:44 2024 00:35:37.836 read: IOPS=130, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5019msec) 00:35:37.836 slat (nsec): min=4794, max=41960, avg=13075.92, stdev=3564.42 00:35:37.836 clat (usec): min=7162, max=57043, avg=23038.13, stdev=17663.56 00:35:37.836 lat (usec): min=7174, max=57056, avg=23051.21, stdev=17663.63 00:35:37.836 clat percentiles (usec): 00:35:37.836 | 1.00th=[ 7439], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11600], 00:35:37.836 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13698], 60.00th=[14353], 00:35:37.836 | 70.00th=[15533], 80.00th=[51119], 90.00th=[53216], 95.00th=[54789], 00:35:37.836 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:35:37.836 | 99.99th=[56886] 00:35:37.836 bw ( KiB/s): min=12288, max=19968, per=24.47%, avg=16640.00, stdev=2476.14, samples=10 00:35:37.836 iops : min= 96, max= 156, avg=130.00, stdev=19.34, samples=10 00:35:37.836 lat (msec) : 10=5.82%, 20=68.45%, 50=1.84%, 100=23.89% 00:35:37.836 cpu : usr=90.95%, sys=8.63%, ctx=9, majf=0, minf=47 00:35:37.836 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.836 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.836 filename0: (groupid=0, jobs=1): err= 0: pid=1954096: Wed May 15 16:56:44 2024 00:35:37.836 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(132MiB/5044msec) 00:35:37.836 slat (nsec): min=7589, max=45387, avg=13080.92, stdev=3503.16 00:35:37.836 clat (usec): min=5245, max=90012, avg=14298.33, stdev=9792.17 00:35:37.836 lat (usec): min=5257, max=90024, avg=14311.41, stdev=9792.32 00:35:37.836 clat percentiles (usec): 00:35:37.836 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 8356], 00:35:37.836 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[13566], 00:35:37.836 | 70.00th=[16319], 80.00th=[19006], 90.00th=[22414], 95.00th=[25822], 00:35:37.836 | 99.00th=[55313], 99.50th=[58983], 99.90th=[60556], 99.95th=[89654], 00:35:37.836 | 99.99th=[89654] 00:35:37.836 bw ( KiB/s): min=24064, max=30976, per=39.57%, avg=26910.80, stdev=2343.30, samples=10 00:35:37.836 iops : min= 188, max= 242, avg=210.20, stdev=18.32, samples=10 00:35:37.836 lat (msec) : 10=43.26%, 20=40.32%, 50=13.38%, 100=3.04% 00:35:37.836 cpu : usr=89.97%, sys=9.60%, ctx=13, majf=0, minf=157 00:35:37.836 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.836 issued rwts: total=1054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.836 filename0: (groupid=0, jobs=1): err= 0: pid=1954097: Wed May 15 16:56:44 2024 00:35:37.836 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(122MiB/5004msec) 00:35:37.836 slat (nsec): min=7346, max=35765, avg=12584.01, stdev=3297.18 00:35:37.836 clat (usec): min=4915, max=91477, avg=15409.05, stdev=11472.78 00:35:37.836 lat (usec): min=4927, max=91489, avg=15421.63, stdev=11472.71 00:35:37.836 clat percentiles (usec): 
00:35:37.836 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 7701], 20.00th=[ 8717], 00:35:37.836 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[14615], 00:35:37.836 | 70.00th=[16909], 80.00th=[19006], 90.00th=[23462], 95.00th=[49546], 00:35:37.836 | 99.00th=[57934], 99.50th=[61080], 99.90th=[91751], 99.95th=[91751], 00:35:37.836 | 99.99th=[91751] 00:35:37.836 bw ( KiB/s): min=16896, max=33024, per=36.51%, avg=24832.00, stdev=4900.54, samples=10 00:35:37.836 iops : min= 132, max= 258, avg=194.00, stdev=38.29, samples=10 00:35:37.836 lat (msec) : 10=44.91%, 20=37.00%, 50=13.57%, 100=4.52% 00:35:37.836 cpu : usr=90.41%, sys=9.15%, ctx=20, majf=0, minf=105 00:35:37.836 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.836 issued rwts: total=973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.836 00:35:37.836 Run status group 0 (all jobs): 00:35:37.836 READ: bw=66.4MiB/s (69.6MB/s), 16.3MiB/s-26.1MiB/s (17.1MB/s-27.4MB/s), io=335MiB (351MB), run=5004-5044msec 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
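The "Run status group 0" line earlier in this block is internally consistent: fio's aggregate READ bandwidth is total bytes over wall-clock time, and the three per-job rates sum to roughly the same figure. A quick check:

    # 335 MiB over the ~5.044 s the longest job ran:
    awk 'BEGIN { printf "%.1f MiB/s\n", 335/5.044 }'       # -> 66.4 MiB/s
    # Sum of the three per-job rates (each over its own runtime):
    awk 'BEGIN { printf "%.1f MiB/s\n", 16.3+26.1+24.3 }'  # -> 66.7 MiB/s

The small gap between 66.7 and 66.4 MiB/s comes from the per-job runtimes differing slightly (5004-5044 ms), so the per-job rates are not computed over a common denominator.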
00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.836 bdev_null0 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:37.836 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 [2024-05-15 16:56:44.530962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 bdev_null1 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 bdev_null2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:37.837 { 00:35:37.837 "params": { 00:35:37.837 "name": "Nvme$subsystem", 00:35:37.837 "trtype": "$TEST_TRANSPORT", 00:35:37.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.837 "adrfam": "ipv4", 00:35:37.837 "trsvcid": "$NVMF_PORT", 00:35:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.837 "hdgst": ${hdgst:-false}, 00:35:37.837 "ddgst": ${ddgst:-false} 00:35:37.837 }, 00:35:37.837 "method": "bdev_nvme_attach_controller" 00:35:37.837 } 00:35:37.837 EOF 00:35:37.837 )") 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.837 { 00:35:37.837 "params": { 00:35:37.837 "name": "Nvme$subsystem", 00:35:37.837 "trtype": "$TEST_TRANSPORT", 00:35:37.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.837 "adrfam": "ipv4", 00:35:37.837 "trsvcid": "$NVMF_PORT", 00:35:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.837 "hdgst": ${hdgst:-false}, 00:35:37.837 "ddgst": ${ddgst:-false} 00:35:37.837 }, 00:35:37.837 "method": "bdev_nvme_attach_controller" 00:35:37.837 } 00:35:37.837 EOF 00:35:37.837 )") 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.837 { 00:35:37.837 "params": { 00:35:37.837 "name": "Nvme$subsystem", 00:35:37.837 "trtype": "$TEST_TRANSPORT", 00:35:37.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.837 "adrfam": "ipv4", 00:35:37.837 "trsvcid": "$NVMF_PORT", 00:35:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.837 "hdgst": ${hdgst:-false}, 00:35:37.837 "ddgst": ${ddgst:-false} 00:35:37.837 }, 00:35:37.837 "method": "bdev_nvme_attach_controller" 00:35:37.837 } 00:35:37.837 EOF 00:35:37.837 )") 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:37.837 16:56:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:37.837 "params": { 00:35:37.837 "name": "Nvme0", 00:35:37.837 "trtype": "tcp", 00:35:37.837 "traddr": "10.0.0.2", 00:35:37.837 "adrfam": "ipv4", 00:35:37.837 "trsvcid": "4420", 00:35:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.837 "hdgst": false, 00:35:37.837 "ddgst": false 00:35:37.837 }, 00:35:37.837 "method": "bdev_nvme_attach_controller" 00:35:37.837 },{ 00:35:37.837 "params": { 00:35:37.837 "name": "Nvme1", 00:35:37.837 "trtype": "tcp", 00:35:37.838 "traddr": "10.0.0.2", 00:35:37.838 "adrfam": "ipv4", 00:35:37.838 "trsvcid": "4420", 00:35:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.838 "hdgst": false, 00:35:37.838 "ddgst": false 00:35:37.838 }, 00:35:37.838 "method": "bdev_nvme_attach_controller" 00:35:37.838 },{ 00:35:37.838 "params": { 00:35:37.838 "name": "Nvme2", 00:35:37.838 "trtype": "tcp", 00:35:37.838 "traddr": "10.0.0.2", 00:35:37.838 "adrfam": "ipv4", 00:35:37.838 "trsvcid": "4420", 00:35:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:37.838 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:37.838 "hdgst": false, 00:35:37.838 "ddgst": false 00:35:37.838 }, 00:35:37.838 "method": "bdev_nvme_attach_controller" 00:35:37.838 }' 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:37.838 16:56:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.838 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:37.838 ... 00:35:37.838 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:37.838 ... 00:35:37.838 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:37.838 ... 00:35:37.838 fio-3.35 00:35:37.838 Starting 24 threads 00:35:37.838 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.043 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954857: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=83, BW=333KiB/s (341kB/s)(3392KiB/10195msec) 00:35:50.043 slat (nsec): min=5181, max=98383, avg=13276.72, stdev=11635.48 00:35:50.043 clat (msec): min=3, max=383, avg=191.49, stdev=77.49 00:35:50.043 lat (msec): min=3, max=383, avg=191.50, stdev=77.49 00:35:50.043 clat percentiles (msec): 00:35:50.043 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 85], 20.00th=[ 159], 00:35:50.043 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 215], 00:35:50.043 | 70.00th=[ 234], 80.00th=[ 251], 90.00th=[ 266], 95.00th=[ 300], 00:35:50.043 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:35:50.043 | 99.99th=[ 384] 00:35:50.043 bw ( KiB/s): min= 176, max= 1024, per=5.75%, avg=332.80, stdev=175.47, samples=20 00:35:50.043 iops : min= 44, max= 256, avg=83.20, stdev=43.87, samples=20 00:35:50.043 lat (msec) : 4=1.30%, 10=5.19%, 20=1.06%, 100=3.54%, 250=68.16% 00:35:50.043 lat (msec) : 500=20.75% 00:35:50.043 cpu : usr=98.13%, sys=1.50%, ctx=22, majf=0, minf=38 00:35:50.043 IO depths : 1=0.2%, 2=0.9%, 4=7.8%, 8=78.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:50.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954858: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=53, BW=214KiB/s (219kB/s)(2176KiB/10176msec) 00:35:50.043 slat (usec): min=8, max=110, avg=50.88, stdev=27.87 00:35:50.043 clat (msec): min=138, max=420, avg=298.78, stdev=68.98 00:35:50.043 lat (msec): min=138, max=420, avg=298.83, stdev=69.00 00:35:50.043 clat percentiles (msec): 00:35:50.043 | 1.00th=[ 140], 5.00th=[ 180], 10.00th=[ 232], 20.00th=[ 249], 00:35:50.043 | 30.00th=[ 259], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 309], 00:35:50.043 | 70.00th=[ 347], 80.00th=[ 372], 90.00th=[ 397], 95.00th=[ 409], 00:35:50.043 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:35:50.043 | 99.99th=[ 422] 00:35:50.043 bw ( KiB/s): min= 128, max= 256, per=3.66%, avg=211.20, stdev=62.64, samples=20 00:35:50.043 iops : min= 32, max= 64, avg=52.80, stdev=15.66, samples=20 00:35:50.043 lat (msec) : 250=21.32%, 500=78.68% 00:35:50.043 cpu : usr=98.20%, sys=1.35%, 
ctx=14, majf=0, minf=20 00:35:50.043 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:50.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954859: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=67, BW=268KiB/s (274kB/s)(2720KiB/10149msec) 00:35:50.043 slat (usec): min=6, max=226, avg=36.10, stdev=35.49 00:35:50.043 clat (msec): min=4, max=430, avg=237.68, stdev=74.82 00:35:50.043 lat (msec): min=4, max=430, avg=237.71, stdev=74.82 00:35:50.043 clat percentiles (msec): 00:35:50.043 | 1.00th=[ 6], 5.00th=[ 52], 10.00th=[ 161], 20.00th=[ 215], 00:35:50.043 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 257], 00:35:50.043 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 296], 95.00th=[ 317], 00:35:50.043 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:35:50.043 | 99.99th=[ 430] 00:35:50.043 bw ( KiB/s): min= 128, max= 641, per=4.59%, avg=265.65, stdev=108.62, samples=20 00:35:50.043 iops : min= 32, max= 160, avg=66.40, stdev=27.11, samples=20 00:35:50.043 lat (msec) : 10=2.35%, 20=2.35%, 100=2.35%, 250=40.88%, 500=52.06% 00:35:50.043 cpu : usr=97.79%, sys=1.42%, ctx=65, majf=0, minf=20 00:35:50.043 IO depths : 1=1.8%, 2=6.9%, 4=21.5%, 8=59.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:50.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954860: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10116msec) 00:35:50.043 slat (nsec): min=4330, max=93662, avg=15798.10, stdev=10712.71 00:35:50.043 clat (msec): min=148, max=538, avg=315.98, stdev=74.16 00:35:50.043 lat (msec): min=148, max=538, avg=316.00, stdev=74.16 00:35:50.043 clat percentiles (msec): 00:35:50.043 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 239], 20.00th=[ 257], 00:35:50.043 | 30.00th=[ 259], 40.00th=[ 284], 50.00th=[ 305], 60.00th=[ 347], 00:35:50.043 | 70.00th=[ 359], 80.00th=[ 384], 90.00th=[ 401], 95.00th=[ 439], 00:35:50.043 | 99.00th=[ 527], 99.50th=[ 527], 99.90th=[ 542], 99.95th=[ 542], 00:35:50.043 | 99.99th=[ 542] 00:35:50.043 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=198.40, stdev=59.28, samples=20 00:35:50.043 iops : min= 32, max= 64, avg=49.60, stdev=14.82, samples=20 00:35:50.043 lat (msec) : 250=17.19%, 500=80.86%, 750=1.95% 00:35:50.043 cpu : usr=98.33%, sys=1.30%, ctx=11, majf=0, minf=22 00:35:50.043 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:50.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954861: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10176msec) 00:35:50.043 slat (usec): min=8, max=172, avg=28.57, stdev=24.85 
00:35:50.043 clat (msec): min=138, max=381, avg=260.70, stdev=41.60 00:35:50.043 lat (msec): min=138, max=381, avg=260.73, stdev=41.60 00:35:50.043 clat percentiles (msec): 00:35:50.043 | 1.00th=[ 140], 5.00th=[ 184], 10.00th=[ 232], 20.00th=[ 241], 00:35:50.043 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 266], 00:35:50.043 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 305], 95.00th=[ 355], 00:35:50.043 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:35:50.043 | 99.99th=[ 384] 00:35:50.043 bw ( KiB/s): min= 128, max= 256, per=4.21%, avg=243.20, stdev=36.93, samples=20 00:35:50.043 iops : min= 32, max= 64, avg=60.80, stdev= 9.23, samples=20 00:35:50.043 lat (msec) : 250=36.22%, 500=63.78% 00:35:50.043 cpu : usr=96.60%, sys=2.02%, ctx=180, majf=0, minf=25 00:35:50.043 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:50.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954862: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=58, BW=232KiB/s (238kB/s)(2360KiB/10155msec) 00:35:50.043 slat (nsec): min=8819, max=98692, avg=32282.98, stdev=23132.39 00:35:50.043 clat (msec): min=175, max=482, avg=274.85, stdev=46.38 00:35:50.043 lat (msec): min=175, max=483, avg=274.89, stdev=46.38 00:35:50.043 clat percentiles (msec): 00:35:50.043 | 1.00th=[ 180], 5.00th=[ 213], 10.00th=[ 232], 20.00th=[ 245], 00:35:50.043 | 30.00th=[ 249], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 271], 00:35:50.043 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 342], 95.00th=[ 384], 00:35:50.043 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 485], 99.95th=[ 485], 00:35:50.043 | 99.99th=[ 485] 00:35:50.043 bw ( KiB/s): min= 128, max= 256, per=3.97%, avg=229.60, stdev=48.50, samples=20 00:35:50.043 iops : min= 32, max= 64, avg=57.40, stdev=12.12, samples=20 00:35:50.043 lat (msec) : 250=32.71%, 500=67.29% 00:35:50.043 cpu : usr=97.79%, sys=1.41%, ctx=80, majf=0, minf=19 00:35:50.043 IO depths : 1=2.4%, 2=8.6%, 4=25.1%, 8=53.9%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:50.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.043 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.043 filename0: (groupid=0, jobs=1): err= 0: pid=1954863: Wed May 15 16:56:55 2024 00:35:50.043 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10174msec) 00:35:50.043 slat (usec): min=8, max=107, avg=38.08, stdev=31.38 00:35:50.044 clat (msec): min=138, max=456, avg=259.61, stdev=43.02 00:35:50.044 lat (msec): min=138, max=456, avg=259.64, stdev=43.03 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 138], 5.00th=[ 180], 10.00th=[ 215], 20.00th=[ 241], 00:35:50.044 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 266], 00:35:50.044 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 305], 95.00th=[ 330], 00:35:50.044 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 456], 99.95th=[ 456], 00:35:50.044 | 99.99th=[ 456] 00:35:50.044 bw ( KiB/s): min= 128, max= 256, per=4.21%, avg=243.20, stdev=36.93, samples=20 00:35:50.044 iops : min= 32, max= 64, avg=60.80, stdev= 9.23, samples=20 
00:35:50.044 lat (msec) : 250=36.22%, 500=63.78% 00:35:50.044 cpu : usr=98.12%, sys=1.32%, ctx=45, majf=0, minf=21 00:35:50.044 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename0: (groupid=0, jobs=1): err= 0: pid=1954864: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10132msec) 00:35:50.044 slat (usec): min=8, max=119, avg=49.10, stdev=30.03 00:35:50.044 clat (msec): min=138, max=381, avg=266.17, stdev=42.15 00:35:50.044 lat (msec): min=138, max=381, avg=266.22, stdev=42.16 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 140], 5.00th=[ 180], 10.00th=[ 232], 20.00th=[ 247], 00:35:50.044 | 30.00th=[ 251], 40.00th=[ 257], 50.00th=[ 264], 60.00th=[ 268], 00:35:50.044 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 355], 00:35:50.044 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:35:50.044 | 99.99th=[ 380] 00:35:50.044 bw ( KiB/s): min= 128, max= 256, per=4.09%, avg=236.80, stdev=46.89, samples=20 00:35:50.044 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:35:50.044 lat (msec) : 250=31.41%, 500=68.59% 00:35:50.044 cpu : usr=98.00%, sys=1.47%, ctx=44, majf=0, minf=20 00:35:50.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename1: (groupid=0, jobs=1): err= 0: pid=1954865: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=50, BW=202KiB/s (206kB/s)(2048KiB/10156msec) 00:35:50.044 slat (usec): min=8, max=133, avg=26.54, stdev=23.99 00:35:50.044 clat (msec): min=150, max=508, avg=317.13, stdev=67.13 00:35:50.044 lat (msec): min=150, max=508, avg=317.16, stdev=67.12 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 153], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 257], 00:35:50.044 | 30.00th=[ 268], 40.00th=[ 292], 50.00th=[ 305], 60.00th=[ 342], 00:35:50.044 | 70.00th=[ 355], 80.00th=[ 384], 90.00th=[ 405], 95.00th=[ 418], 00:35:50.044 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 510], 99.95th=[ 510], 00:35:50.044 | 99.99th=[ 510] 00:35:50.044 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=198.40, stdev=59.28, samples=20 00:35:50.044 iops : min= 32, max= 64, avg=49.60, stdev=14.82, samples=20 00:35:50.044 lat (msec) : 250=12.50%, 500=86.33%, 750=1.17% 00:35:50.044 cpu : usr=97.84%, sys=1.47%, ctx=42, majf=0, minf=21 00:35:50.044 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename1: (groupid=0, jobs=1): err= 0: pid=1954866: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=56, BW=226KiB/s (231kB/s)(2296KiB/10157msec) 00:35:50.044 
slat (usec): min=8, max=120, avg=18.51, stdev=10.91 00:35:50.044 clat (msec): min=168, max=456, avg=282.71, stdev=56.41 00:35:50.044 lat (msec): min=168, max=456, avg=282.73, stdev=56.41 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 239], 20.00th=[ 249], 00:35:50.044 | 30.00th=[ 253], 40.00th=[ 259], 50.00th=[ 266], 60.00th=[ 279], 00:35:50.044 | 70.00th=[ 300], 80.00th=[ 338], 90.00th=[ 380], 95.00th=[ 397], 00:35:50.044 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 456], 00:35:50.044 | 99.99th=[ 456] 00:35:50.044 bw ( KiB/s): min= 128, max= 256, per=3.87%, avg=223.20, stdev=56.50, samples=20 00:35:50.044 iops : min= 32, max= 64, avg=55.80, stdev=14.13, samples=20 00:35:50.044 lat (msec) : 250=24.74%, 500=75.26% 00:35:50.044 cpu : usr=97.49%, sys=1.65%, ctx=80, majf=0, minf=24 00:35:50.044 IO depths : 1=4.4%, 2=10.6%, 4=25.1%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename1: (groupid=0, jobs=1): err= 0: pid=1954867: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10116msec) 00:35:50.044 slat (usec): min=17, max=113, avg=52.54, stdev=26.25 00:35:50.044 clat (msec): min=177, max=533, avg=315.68, stdev=61.77 00:35:50.044 lat (msec): min=177, max=533, avg=315.73, stdev=61.79 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 203], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 257], 00:35:50.044 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 342], 00:35:50.044 | 70.00th=[ 347], 80.00th=[ 380], 90.00th=[ 401], 95.00th=[ 409], 00:35:50.044 | 99.00th=[ 518], 99.50th=[ 527], 99.90th=[ 535], 99.95th=[ 535], 00:35:50.044 | 99.99th=[ 535] 00:35:50.044 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=198.40, stdev=63.87, samples=20 00:35:50.044 iops : min= 32, max= 64, avg=49.60, stdev=15.97, samples=20 00:35:50.044 lat (msec) : 250=11.33%, 500=87.50%, 750=1.17% 00:35:50.044 cpu : usr=98.34%, sys=1.26%, ctx=10, majf=0, minf=23 00:35:50.044 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename1: (groupid=0, jobs=1): err= 0: pid=1954868: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=61, BW=247KiB/s (253kB/s)(2512KiB/10174msec) 00:35:50.044 slat (usec): min=12, max=150, avg=23.07, stdev=16.11 00:35:50.044 clat (msec): min=138, max=385, avg=258.11, stdev=43.41 00:35:50.044 lat (msec): min=138, max=385, avg=258.13, stdev=43.41 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 138], 5.00th=[ 180], 10.00th=[ 211], 20.00th=[ 232], 00:35:50.044 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 264], 00:35:50.044 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 309], 95.00th=[ 326], 00:35:50.044 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:35:50.044 | 99.99th=[ 388] 00:35:50.044 bw ( KiB/s): min= 128, max= 272, per=4.23%, avg=244.80, stdev=30.31, samples=20 00:35:50.044 iops : min= 
32, max= 68, avg=61.20, stdev= 7.58, samples=20 00:35:50.044 lat (msec) : 250=37.90%, 500=62.10% 00:35:50.044 cpu : usr=97.22%, sys=1.94%, ctx=24, majf=0, minf=20 00:35:50.044 IO depths : 1=2.5%, 2=5.9%, 4=15.9%, 8=65.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename1: (groupid=0, jobs=1): err= 0: pid=1954869: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10174msec) 00:35:50.044 slat (usec): min=4, max=142, avg=54.02, stdev=29.97 00:35:50.044 clat (msec): min=117, max=420, avg=259.71, stdev=47.87 00:35:50.044 lat (msec): min=117, max=420, avg=259.76, stdev=47.87 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 117], 5.00th=[ 146], 10.00th=[ 213], 20.00th=[ 239], 00:35:50.044 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 262], 00:35:50.044 | 70.00th=[ 268], 80.00th=[ 296], 90.00th=[ 313], 95.00th=[ 338], 00:35:50.044 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:35:50.044 | 99.99th=[ 422] 00:35:50.044 bw ( KiB/s): min= 128, max= 384, per=4.21%, avg=243.20, stdev=53.85, samples=20 00:35:50.044 iops : min= 32, max= 96, avg=60.80, stdev=13.46, samples=20 00:35:50.044 lat (msec) : 250=33.33%, 500=66.67% 00:35:50.044 cpu : usr=97.91%, sys=1.30%, ctx=79, majf=0, minf=30 00:35:50.044 IO depths : 1=1.9%, 2=6.2%, 4=19.2%, 8=62.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.044 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.044 filename1: (groupid=0, jobs=1): err= 0: pid=1954870: Wed May 15 16:56:55 2024 00:35:50.044 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10175msec) 00:35:50.044 slat (usec): min=8, max=119, avg=37.93, stdev=27.97 00:35:50.044 clat (msec): min=164, max=353, avg=259.65, stdev=32.95 00:35:50.044 lat (msec): min=164, max=354, avg=259.69, stdev=32.95 00:35:50.044 clat percentiles (msec): 00:35:50.044 | 1.00th=[ 165], 5.00th=[ 213], 10.00th=[ 215], 20.00th=[ 241], 00:35:50.044 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 262], 00:35:50.044 | 70.00th=[ 268], 80.00th=[ 279], 90.00th=[ 300], 95.00th=[ 309], 00:35:50.044 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 355], 00:35:50.044 | 99.99th=[ 355] 00:35:50.044 bw ( KiB/s): min= 144, max= 272, per=4.21%, avg=243.20, stdev=34.67, samples=20 00:35:50.044 iops : min= 36, max= 68, avg=60.80, stdev= 8.67, samples=20 00:35:50.044 lat (msec) : 250=35.10%, 500=64.90% 00:35:50.044 cpu : usr=98.09%, sys=1.32%, ctx=34, majf=0, minf=28 00:35:50.044 IO depths : 1=3.2%, 2=9.3%, 4=24.5%, 8=53.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:50.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename1: (groupid=0, jobs=1): err= 0: pid=1954871: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=74, BW=299KiB/s 
(306kB/s)(3040KiB/10174msec) 00:35:50.045 slat (nsec): min=8289, max=32032, avg=11409.89, stdev=4401.53 00:35:50.045 clat (msec): min=108, max=376, avg=212.69, stdev=47.97 00:35:50.045 lat (msec): min=108, max=376, avg=212.70, stdev=47.97 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 111], 5.00th=[ 138], 10.00th=[ 161], 20.00th=[ 167], 00:35:50.045 | 30.00th=[ 178], 40.00th=[ 192], 50.00th=[ 211], 60.00th=[ 236], 00:35:50.045 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 268], 00:35:50.045 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:35:50.045 | 99.99th=[ 376] 00:35:50.045 bw ( KiB/s): min= 208, max= 384, per=5.15%, avg=297.60, stdev=61.07, samples=20 00:35:50.045 iops : min= 52, max= 96, avg=74.40, stdev=15.27, samples=20 00:35:50.045 lat (msec) : 250=73.16%, 500=26.84% 00:35:50.045 cpu : usr=97.74%, sys=1.56%, ctx=76, majf=0, minf=24 00:35:50.045 IO depths : 1=0.3%, 2=0.7%, 4=8.6%, 8=77.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename1: (groupid=0, jobs=1): err= 0: pid=1954872: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=50, BW=202KiB/s (206kB/s)(2048KiB/10156msec) 00:35:50.045 slat (nsec): min=8775, max=79589, avg=25504.83, stdev=10367.46 00:35:50.045 clat (msec): min=184, max=521, avg=317.14, stdev=69.03 00:35:50.045 lat (msec): min=184, max=521, avg=317.17, stdev=69.03 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 184], 5.00th=[ 203], 10.00th=[ 251], 20.00th=[ 257], 00:35:50.045 | 30.00th=[ 266], 40.00th=[ 292], 50.00th=[ 300], 60.00th=[ 342], 00:35:50.045 | 70.00th=[ 351], 80.00th=[ 384], 90.00th=[ 409], 95.00th=[ 443], 00:35:50.045 | 99.00th=[ 498], 99.50th=[ 518], 99.90th=[ 523], 99.95th=[ 523], 00:35:50.045 | 99.99th=[ 523] 00:35:50.045 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=198.40, stdev=63.87, samples=20 00:35:50.045 iops : min= 32, max= 64, avg=49.60, stdev=15.97, samples=20 00:35:50.045 lat (msec) : 250=10.16%, 500=89.06%, 750=0.78% 00:35:50.045 cpu : usr=98.13%, sys=1.30%, ctx=27, majf=0, minf=26 00:35:50.045 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename2: (groupid=0, jobs=1): err= 0: pid=1954873: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10163msec) 00:35:50.045 slat (nsec): min=8489, max=95194, avg=23245.72, stdev=11680.99 00:35:50.045 clat (msec): min=167, max=391, avg=266.24, stdev=30.45 00:35:50.045 lat (msec): min=167, max=391, avg=266.26, stdev=30.45 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 205], 5.00th=[ 232], 10.00th=[ 241], 20.00th=[ 247], 00:35:50.045 | 30.00th=[ 249], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 266], 00:35:50.045 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 305], 95.00th=[ 309], 00:35:50.045 | 99.00th=[ 355], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 393], 00:35:50.045 | 99.99th=[ 393] 00:35:50.045 bw ( KiB/s): min= 128, max= 256, 
per=4.09%, avg=236.80, stdev=46.89, samples=20 00:35:50.045 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:35:50.045 lat (msec) : 250=32.57%, 500=67.43% 00:35:50.045 cpu : usr=98.11%, sys=1.21%, ctx=37, majf=0, minf=21 00:35:50.045 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename2: (groupid=0, jobs=1): err= 0: pid=1954874: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=77, BW=310KiB/s (317kB/s)(3160KiB/10196msec) 00:35:50.045 slat (usec): min=8, max=102, avg=23.38, stdev=21.72 00:35:50.045 clat (msec): min=3, max=388, avg=205.55, stdev=74.94 00:35:50.045 lat (msec): min=3, max=388, avg=205.57, stdev=74.93 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 124], 20.00th=[ 167], 00:35:50.045 | 30.00th=[ 178], 40.00th=[ 197], 50.00th=[ 222], 60.00th=[ 234], 00:35:50.045 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 268], 95.00th=[ 305], 00:35:50.045 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:35:50.045 | 99.99th=[ 388] 00:35:50.045 bw ( KiB/s): min= 224, max= 768, per=5.36%, avg=309.60, stdev=119.65, samples=20 00:35:50.045 iops : min= 56, max= 192, avg=77.40, stdev=29.91, samples=20 00:35:50.045 lat (msec) : 4=0.25%, 10=3.80%, 20=2.03%, 100=2.03%, 250=65.19% 00:35:50.045 lat (msec) : 500=26.71% 00:35:50.045 cpu : usr=98.40%, sys=1.17%, ctx=30, majf=0, minf=26 00:35:50.045 IO depths : 1=0.6%, 2=2.4%, 4=11.0%, 8=73.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=90.1%, 8=4.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename2: (groupid=0, jobs=1): err= 0: pid=1954875: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=60, BW=241KiB/s (247kB/s)(2456KiB/10175msec) 00:35:50.045 slat (usec): min=8, max=105, avg=42.39, stdev=32.18 00:35:50.045 clat (msec): min=180, max=433, avg=263.90, stdev=33.54 00:35:50.045 lat (msec): min=180, max=434, avg=263.94, stdev=33.56 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 201], 5.00th=[ 213], 10.00th=[ 228], 20.00th=[ 245], 00:35:50.045 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 257], 60.00th=[ 262], 00:35:50.045 | 70.00th=[ 268], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 317], 00:35:50.045 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 435], 99.95th=[ 435], 00:35:50.045 | 99.99th=[ 435] 00:35:50.045 bw ( KiB/s): min= 128, max= 256, per=4.14%, avg=239.20, stdev=38.31, samples=20 00:35:50.045 iops : min= 32, max= 64, avg=59.80, stdev= 9.58, samples=20 00:35:50.045 lat (msec) : 250=34.36%, 500=65.64% 00:35:50.045 cpu : usr=97.99%, sys=1.39%, ctx=26, majf=0, minf=35 00:35:50.045 IO depths : 1=2.1%, 2=6.2%, 4=18.4%, 8=62.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 
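As a quick consistency check on how fio's per-file summary numbers relate, take the 4 KiB random-read block that ends just above (pid=1954875): bandwidth is bytes transferred over runtime, and IOPS is bandwidth over block size.

\[
\mathrm{BW} = \frac{2456\ \mathrm{KiB}}{10.175\ \mathrm{s}} \approx 241\ \mathrm{KiB/s},
\qquad
\mathrm{IOPS} = \frac{241\ \mathrm{KiB/s}}{4\ \mathrm{KiB}} \approx 60
\]

Both agree with the reported IOPS=60, BW=241KiB/s for that file.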
00:35:50.045 filename2: (groupid=0, jobs=1): err= 0: pid=1954876: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=50, BW=202KiB/s (206kB/s)(2048KiB/10156msec) 00:35:50.045 slat (usec): min=8, max=149, avg=52.77, stdev=31.00 00:35:50.045 clat (msec): min=184, max=519, avg=316.90, stdev=64.80 00:35:50.045 lat (msec): min=184, max=519, avg=316.95, stdev=64.78 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 184], 5.00th=[ 230], 10.00th=[ 251], 20.00th=[ 257], 00:35:50.045 | 30.00th=[ 275], 40.00th=[ 292], 50.00th=[ 309], 60.00th=[ 342], 00:35:50.045 | 70.00th=[ 351], 80.00th=[ 384], 90.00th=[ 409], 95.00th=[ 422], 00:35:50.045 | 99.00th=[ 443], 99.50th=[ 477], 99.90th=[ 518], 99.95th=[ 518], 00:35:50.045 | 99.99th=[ 518] 00:35:50.045 bw ( KiB/s): min= 128, max= 272, per=3.43%, avg=198.40, stdev=61.07, samples=20 00:35:50.045 iops : min= 32, max= 68, avg=49.60, stdev=15.27, samples=20 00:35:50.045 lat (msec) : 250=9.57%, 500=90.04%, 750=0.39% 00:35:50.045 cpu : usr=98.01%, sys=1.34%, ctx=70, majf=0, minf=26 00:35:50.045 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename2: (groupid=0, jobs=1): err= 0: pid=1954877: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10160msec) 00:35:50.045 slat (usec): min=4, max=116, avg=20.17, stdev=13.01 00:35:50.045 clat (msec): min=152, max=446, avg=267.18, stdev=41.11 00:35:50.045 lat (msec): min=153, max=446, avg=267.20, stdev=41.11 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 184], 5.00th=[ 215], 10.00th=[ 232], 20.00th=[ 243], 00:35:50.045 | 30.00th=[ 249], 40.00th=[ 257], 50.00th=[ 259], 60.00th=[ 266], 00:35:50.045 | 70.00th=[ 271], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 368], 00:35:50.045 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 447], 99.95th=[ 447], 00:35:50.045 | 99.99th=[ 447] 00:35:50.045 bw ( KiB/s): min= 128, max= 256, per=4.09%, avg=236.80, stdev=42.68, samples=20 00:35:50.045 iops : min= 32, max= 64, avg=59.20, stdev=10.67, samples=20 00:35:50.045 lat (msec) : 250=34.54%, 500=65.46% 00:35:50.045 cpu : usr=97.78%, sys=1.49%, ctx=35, majf=0, minf=17 00:35:50.045 IO depths : 1=1.6%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.045 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.045 filename2: (groupid=0, jobs=1): err= 0: pid=1954878: Wed May 15 16:56:55 2024 00:35:50.045 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10155msec) 00:35:50.045 slat (nsec): min=9064, max=47146, avg=25555.90, stdev=7599.04 00:35:50.045 clat (msec): min=179, max=499, avg=317.11, stdev=62.98 00:35:50.045 lat (msec): min=179, max=499, avg=317.13, stdev=62.98 00:35:50.045 clat percentiles (msec): 00:35:50.045 | 1.00th=[ 203], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 257], 00:35:50.045 | 30.00th=[ 271], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 317], 00:35:50.045 | 70.00th=[ 351], 80.00th=[ 384], 90.00th=[ 409], 95.00th=[ 443], 00:35:50.046 | 99.00th=[ 489], 99.50th=[ 498], 
99.90th=[ 502], 99.95th=[ 502], 00:35:50.046 | 99.99th=[ 502] 00:35:50.046 bw ( KiB/s): min= 128, max= 272, per=3.43%, avg=198.40, stdev=59.51, samples=20 00:35:50.046 iops : min= 32, max= 68, avg=49.60, stdev=14.88, samples=20 00:35:50.046 lat (msec) : 250=5.47%, 500=94.53% 00:35:50.046 cpu : usr=98.36%, sys=1.26%, ctx=15, majf=0, minf=24 00:35:50.046 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:50.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.046 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.046 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.046 filename2: (groupid=0, jobs=1): err= 0: pid=1954879: Wed May 15 16:56:55 2024 00:35:50.046 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10154msec) 00:35:50.046 slat (usec): min=8, max=109, avg=62.09, stdev=22.45 00:35:50.046 clat (msec): min=149, max=500, avg=316.79, stdev=66.63 00:35:50.046 lat (msec): min=149, max=500, avg=316.86, stdev=66.63 00:35:50.046 clat percentiles (msec): 00:35:50.046 | 1.00th=[ 155], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 257], 00:35:50.046 | 30.00th=[ 266], 40.00th=[ 292], 50.00th=[ 305], 60.00th=[ 342], 00:35:50.046 | 70.00th=[ 355], 80.00th=[ 384], 90.00th=[ 405], 95.00th=[ 418], 00:35:50.046 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:35:50.046 | 99.99th=[ 502] 00:35:50.046 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=198.40, stdev=63.87, samples=20 00:35:50.046 iops : min= 32, max= 64, avg=49.60, stdev=15.97, samples=20 00:35:50.046 lat (msec) : 250=12.50%, 500=87.11%, 750=0.39% 00:35:50.046 cpu : usr=98.23%, sys=1.23%, ctx=118, majf=0, minf=24 00:35:50.046 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:50.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.046 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.046 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.046 filename2: (groupid=0, jobs=1): err= 0: pid=1954880: Wed May 15 16:56:55 2024 00:35:50.046 read: IOPS=76, BW=304KiB/s (312kB/s)(3096KiB/10174msec) 00:35:50.046 slat (nsec): min=8021, max=50113, avg=12433.77, stdev=5267.97 00:35:50.046 clat (msec): min=137, max=376, avg=209.40, stdev=42.60 00:35:50.046 lat (msec): min=137, max=376, avg=209.41, stdev=42.60 00:35:50.046 clat percentiles (msec): 00:35:50.046 | 1.00th=[ 138], 5.00th=[ 148], 10.00th=[ 153], 20.00th=[ 165], 00:35:50.046 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 205], 60.00th=[ 230], 00:35:50.046 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 259], 95.00th=[ 266], 00:35:50.046 | 99.00th=[ 321], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:35:50.046 | 99.99th=[ 376] 00:35:50.046 bw ( KiB/s): min= 256, max= 384, per=5.25%, avg=303.20, stdev=60.20, samples=20 00:35:50.046 iops : min= 64, max= 96, avg=75.80, stdev=15.05, samples=20 00:35:50.046 lat (msec) : 250=78.81%, 500=21.19% 00:35:50.046 cpu : usr=98.09%, sys=1.54%, ctx=15, majf=0, minf=32 00:35:50.046 IO depths : 1=0.3%, 2=3.9%, 4=17.1%, 8=66.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:50.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.046 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.046 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:50.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:50.046 00:35:50.046 Run status group 0 (all jobs): 00:35:50.046 READ: bw=5769KiB/s (5908kB/s), 202KiB/s-333KiB/s (206kB/s-341kB/s), io=57.4MiB (60.2MB), run=10116-10196msec 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:50.046 16:56:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 bdev_null0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 [2024-05-15 16:56:56.335512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:50.046 
16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 bdev_null1 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.046 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.047 { 00:35:50.047 "params": { 00:35:50.047 "name": "Nvme$subsystem", 00:35:50.047 "trtype": "$TEST_TRANSPORT", 00:35:50.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.047 "adrfam": "ipv4", 00:35:50.047 "trsvcid": "$NVMF_PORT", 00:35:50.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.047 "hdgst": ${hdgst:-false}, 00:35:50.047 "ddgst": ${ddgst:-false} 00:35:50.047 }, 00:35:50.047 "method": "bdev_nvme_attach_controller" 00:35:50.047 } 00:35:50.047 EOF 00:35:50.047 )") 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 
-- # local fio_dir=/usr/src/fio 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.047 { 00:35:50.047 "params": { 00:35:50.047 "name": "Nvme$subsystem", 00:35:50.047 "trtype": "$TEST_TRANSPORT", 00:35:50.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.047 "adrfam": "ipv4", 00:35:50.047 "trsvcid": "$NVMF_PORT", 00:35:50.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.047 "hdgst": ${hdgst:-false}, 00:35:50.047 "ddgst": ${ddgst:-false} 00:35:50.047 }, 00:35:50.047 "method": "bdev_nvme_attach_controller" 00:35:50.047 } 00:35:50.047 EOF 00:35:50.047 )") 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:50.047 "params": { 00:35:50.047 "name": "Nvme0", 00:35:50.047 "trtype": "tcp", 00:35:50.047 "traddr": "10.0.0.2", 00:35:50.047 "adrfam": "ipv4", 00:35:50.047 "trsvcid": "4420", 00:35:50.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.047 "hdgst": false, 00:35:50.047 "ddgst": false 00:35:50.047 }, 00:35:50.047 "method": "bdev_nvme_attach_controller" 00:35:50.047 },{ 00:35:50.047 "params": { 00:35:50.047 "name": "Nvme1", 00:35:50.047 "trtype": "tcp", 00:35:50.047 "traddr": "10.0.0.2", 00:35:50.047 "adrfam": "ipv4", 00:35:50.047 "trsvcid": "4420", 00:35:50.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:50.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:50.047 "hdgst": false, 00:35:50.047 "ddgst": false 00:35:50.047 }, 00:35:50.047 "method": "bdev_nvme_attach_controller" 00:35:50.047 }' 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:50.047 16:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.047 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:50.047 ... 00:35:50.047 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:50.047 ... 
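The harness drives fio through SPDK's bdev plugin here: the generated JSON above is streamed in on /dev/fd/62 and the fio job file on /dev/fd/61. A minimal standalone equivalent is sketched below, assuming the printed JSON is saved to /tmp/bdev.json inside the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope (the jq step implies that wrapper; the log only prints the config entries), and assuming each attached controller exposes one namespace bdev named Nvme0n1 / Nvme1n1 per SPDK's <controller>n<nsid> convention. The file paths, the fixed bs=8k, and numjobs=2 (inferred from the 4 threads started below) are illustrative; fio_dif_rand_params randomizes these parameters per run.

# A minimal sketch, not the harness's exact invocation: bdev names and
# file paths are assumptions as noted above.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
# JSON with the two bdev_nvme_attach_controller calls printed above
spdk_json_conf=/tmp/bdev.json
# the SPDK fio plugin requires fio's thread mode
thread=1
direct=1
rw=randread
bs=8k
iodepth=8
# 2 jobs x 2 files = the 4 threads fio starts below
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# same plugin path and fio binary the harness uses
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif_rand.fio

Run that way, fio should print per-file job banners like the ones above (rw=randread, ioengine=spdk_bdev, iodepth=8) before starting the threads.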
00:35:50.047 fio-3.35 00:35:50.047 Starting 4 threads 00:35:50.047 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.323 00:35:55.323 filename0: (groupid=0, jobs=1): err= 0: pid=1956379: Wed May 15 16:57:02 2024 00:35:55.323 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5002msec) 00:35:55.323 slat (nsec): min=3956, max=68200, avg=20164.98, stdev=10714.55 00:35:55.323 clat (usec): min=900, max=7828, avg=4302.03, stdev=524.52 00:35:55.323 lat (usec): min=912, max=7858, avg=4322.20, stdev=524.09 00:35:55.323 clat percentiles (usec): 00:35:55.323 | 1.00th=[ 3064], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 4047], 00:35:55.323 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:35:55.323 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5211], 00:35:55.323 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7439], 00:35:55.323 | 99.99th=[ 7832] 00:35:55.323 bw ( KiB/s): min=14080, max=14960, per=24.77%, avg=14613.33, stdev=251.33, samples=9 00:35:55.323 iops : min= 1760, max= 1870, avg=1826.67, stdev=31.42, samples=9 00:35:55.323 lat (usec) : 1000=0.01% 00:35:55.323 lat (msec) : 2=0.12%, 4=16.74%, 10=83.13% 00:35:55.323 cpu : usr=93.92%, sys=5.62%, ctx=24, majf=0, minf=49 00:35:55.323 IO depths : 1=0.1%, 2=11.2%, 4=62.0%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.323 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.323 issued rwts: total=9159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.323 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.323 filename0: (groupid=0, jobs=1): err= 0: pid=1956380: Wed May 15 16:57:02 2024 00:35:55.323 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5003msec) 00:35:55.323 slat (usec): min=4, max=273, avg=21.40, stdev=12.26 00:35:55.323 clat (usec): min=721, max=8047, avg=4244.33, stdev=524.26 00:35:55.323 lat (usec): min=738, max=8060, avg=4265.73, stdev=524.33 00:35:55.323 clat percentiles (usec): 00:35:55.323 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 3982], 00:35:55.323 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:55.323 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5080], 00:35:55.323 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7570], 99.95th=[ 7832], 00:35:55.323 | 99.99th=[ 8029] 00:35:55.323 bw ( KiB/s): min=14080, max=15136, per=25.11%, avg=14817.60, stdev=367.61, samples=10 00:35:55.323 iops : min= 1760, max= 1892, avg=1852.20, stdev=45.95, samples=10 00:35:55.323 lat (usec) : 750=0.01%, 1000=0.04% 00:35:55.323 lat (msec) : 2=0.12%, 4=20.30%, 10=79.52% 00:35:55.323 cpu : usr=93.18%, sys=4.94%, ctx=30, majf=0, minf=38 00:35:55.323 IO depths : 1=0.1%, 2=13.6%, 4=59.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.323 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.323 issued rwts: total=9269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.323 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.323 filename1: (groupid=0, jobs=1): err= 0: pid=1956381: Wed May 15 16:57:02 2024 00:35:55.323 read: IOPS=1847, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5001msec) 00:35:55.323 slat (nsec): min=3868, max=66159, avg=19685.56, stdev=10456.04 00:35:55.323 clat (usec): min=763, max=7228, avg=4263.58, stdev=518.10 00:35:55.323 lat (usec): min=781, max=7244, avg=4283.26, stdev=518.32 
00:35:55.323 clat percentiles (usec): 00:35:55.323 | 1.00th=[ 2868], 5.00th=[ 3589], 10.00th=[ 3818], 20.00th=[ 4015], 00:35:55.323 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:55.323 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 5080], 00:35:55.323 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6915], 99.95th=[ 7046], 00:35:55.323 | 99.99th=[ 7242] 00:35:55.323 bw ( KiB/s): min=14336, max=15136, per=24.99%, avg=14746.67, stdev=275.97, samples=9 00:35:55.323 iops : min= 1792, max= 1892, avg=1843.33, stdev=34.50, samples=9 00:35:55.323 lat (usec) : 1000=0.01% 00:35:55.323 lat (msec) : 2=0.18%, 4=19.04%, 10=80.77% 00:35:55.323 cpu : usr=93.68%, sys=5.84%, ctx=9, majf=0, minf=52 00:35:55.323 IO depths : 1=0.2%, 2=11.1%, 4=62.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.323 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.323 issued rwts: total=9239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.323 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.323 filename1: (groupid=0, jobs=1): err= 0: pid=1956382: Wed May 15 16:57:02 2024 00:35:55.324 read: IOPS=1845, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5001msec) 00:35:55.324 slat (nsec): min=4197, max=67089, avg=21427.75, stdev=10707.40 00:35:55.324 clat (usec): min=761, max=7880, avg=4260.93, stdev=496.03 00:35:55.324 lat (usec): min=779, max=7895, avg=4282.36, stdev=495.88 00:35:55.324 clat percentiles (usec): 00:35:55.324 | 1.00th=[ 3163], 5.00th=[ 3720], 10.00th=[ 3884], 20.00th=[ 4015], 00:35:55.324 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:55.324 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4817], 00:35:55.324 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7504], 00:35:55.324 | 99.99th=[ 7898] 00:35:55.324 bw ( KiB/s): min=14108, max=15872, per=24.97%, avg=14733.78, stdev=487.03, samples=9 00:35:55.324 iops : min= 1763, max= 1984, avg=1841.67, stdev=60.96, samples=9 00:35:55.324 lat (usec) : 1000=0.05% 00:35:55.324 lat (msec) : 2=0.27%, 4=19.10%, 10=80.57% 00:35:55.324 cpu : usr=91.48%, sys=6.60%, ctx=164, majf=0, minf=77 00:35:55.324 IO depths : 1=0.1%, 2=11.8%, 4=61.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.324 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.324 issued rwts: total=9230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.324 00:35:55.324 Run status group 0 (all jobs): 00:35:55.324 READ: bw=57.6MiB/s (60.4MB/s), 14.3MiB/s-14.5MiB/s (15.0MB/s-15.2MB/s), io=288MiB (302MB), run=5001-5003msec 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 00:35:55.581 real 0m24.507s 00:35:55.581 user 4m35.850s 00:35:55.581 sys 0m6.967s 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 ************************************ 00:35:55.581 END TEST fio_dif_rand_params 00:35:55.581 ************************************ 00:35:55.581 16:57:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:55.581 16:57:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:55.581 16:57:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 ************************************ 00:35:55.581 START TEST fio_dif_digest 00:35:55.581 ************************************ 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:55.581 16:57:02 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 bdev_null0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.581 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.582 [2024-05-15 16:57:02.754229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:55.582 { 00:35:55.582 "params": { 00:35:55.582 "name": "Nvme$subsystem", 00:35:55.582 "trtype": "$TEST_TRANSPORT", 00:35:55.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.582 "adrfam": "ipv4", 00:35:55.582 "trsvcid": "$NVMF_PORT", 00:35:55.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.582 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:35:55.582 "hdgst": ${hdgst:-false}, 00:35:55.582 "ddgst": ${ddgst:-false} 00:35:55.582 }, 00:35:55.582 "method": "bdev_nvme_attach_controller" 00:35:55.582 } 00:35:55.582 EOF 00:35:55.582 )") 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:55.582 "params": { 00:35:55.582 "name": "Nvme0", 00:35:55.582 "trtype": "tcp", 00:35:55.582 "traddr": "10.0.0.2", 00:35:55.582 "adrfam": "ipv4", 00:35:55.582 "trsvcid": "4420", 00:35:55.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.582 "hdgst": true, 00:35:55.582 "ddgst": true 00:35:55.582 }, 00:35:55.582 "method": "bdev_nvme_attach_controller" 00:35:55.582 }' 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.582 16:57:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.840 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:55.840 ... 
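Relative to the rand_params run, the only functional change in the attach parameters here is "hdgst": true and "ddgst": true, which ask the NVMe/TCP initiator to negotiate header digest and data digest at connect time (CRC32C checks over PDU headers and payloads, which is what this fio_dif_digest test exercises). Saved as a standalone file, the config the harness streams over /dev/fd/62 would look roughly like the sketch below; the /tmp path is illustrative and the subsystems/bdev envelope is inferred from the jq step rather than printed verbatim in the log, while the params block is copied from the printf output above.

# Hedged sketch of the digest-enabled attach config as a standalone file.
cat > /tmp/bdev_digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF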
00:35:55.840 fio-3.35 00:35:55.840 Starting 3 threads 00:35:55.840 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.029 00:36:08.029 filename0: (groupid=0, jobs=1): err= 0: pid=1957244: Wed May 15 16:57:13 2024 00:36:08.029 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(267MiB/10047msec) 00:36:08.029 slat (nsec): min=5249, max=62163, avg=18260.24, stdev=3523.63 00:36:08.029 clat (usec): min=10569, max=57372, avg=14094.47, stdev=2173.88 00:36:08.029 lat (usec): min=10596, max=57393, avg=14112.73, stdev=2173.88 00:36:08.029 clat percentiles (usec): 00:36:08.029 | 1.00th=[11600], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:36:08.029 | 30.00th=[13435], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:36:08.029 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:36:08.029 | 99.00th=[16581], 99.50th=[17171], 99.90th=[55837], 99.95th=[56361], 00:36:08.029 | 99.99th=[57410] 00:36:08.029 bw ( KiB/s): min=25600, max=28160, per=34.06%, avg=27251.20, stdev=547.64, samples=20 00:36:08.029 iops : min= 200, max= 220, avg=212.90, stdev= 4.28, samples=20 00:36:08.029 lat (msec) : 20=99.77%, 50=0.05%, 100=0.19% 00:36:08.029 cpu : usr=90.42%, sys=8.84%, ctx=19, majf=0, minf=128 00:36:08.029 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.029 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.029 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:08.029 filename0: (groupid=0, jobs=1): err= 0: pid=1957245: Wed May 15 16:57:13 2024 00:36:08.029 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10045msec) 00:36:08.029 slat (nsec): min=5409, max=38957, avg=16376.24, stdev=3809.97 00:36:08.029 clat (usec): min=9050, max=52667, avg=13910.55, stdev=1559.41 00:36:08.029 lat (usec): min=9064, max=52685, avg=13926.92, stdev=1559.30 00:36:08.029 clat percentiles (usec): 00:36:08.029 | 1.00th=[11076], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:36:08.029 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:36:08.029 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:36:08.029 | 99.00th=[16450], 99.50th=[16712], 99.90th=[20055], 99.95th=[47973], 00:36:08.029 | 99.99th=[52691] 00:36:08.029 bw ( KiB/s): min=26624, max=28928, per=34.53%, avg=27625.10, stdev=656.73, samples=20 00:36:08.029 iops : min= 208, max= 226, avg=215.80, stdev= 5.15, samples=20 00:36:08.029 lat (msec) : 10=0.37%, 20=99.40%, 50=0.19%, 100=0.05% 00:36:08.029 cpu : usr=89.40%, sys=8.64%, ctx=510, majf=0, minf=137 00:36:08.029 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.029 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.029 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:08.029 filename0: (groupid=0, jobs=1): err= 0: pid=1957246: Wed May 15 16:57:13 2024 00:36:08.029 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(249MiB/10048msec) 00:36:08.029 slat (nsec): min=5306, max=41833, avg=17471.55, stdev=3999.79 00:36:08.029 clat (usec): min=8921, max=58922, avg=15109.48, stdev=1692.23 00:36:08.029 lat (usec): min=8935, max=58955, avg=15126.95, stdev=1692.34 00:36:08.029 clat percentiles (usec): 00:36:08.029 | 
1.00th=[12125], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:36:08.029 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:36:08.029 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:36:08.029 | 99.00th=[17957], 99.50th=[18220], 99.90th=[47973], 99.95th=[58983], 00:36:08.029 | 99.99th=[58983] 00:36:08.029 bw ( KiB/s): min=24576, max=26624, per=31.79%, avg=25433.60, stdev=527.10, samples=20 00:36:08.029 iops : min= 192, max= 208, avg=198.70, stdev= 4.12, samples=20 00:36:08.029 lat (msec) : 10=0.30%, 20=99.60%, 50=0.05%, 100=0.05% 00:36:08.029 cpu : usr=91.60%, sys=7.85%, ctx=19, majf=0, minf=88 00:36:08.029 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.029 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.029 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:08.029 00:36:08.029 Run status group 0 (all jobs): 00:36:08.029 READ: bw=78.1MiB/s (81.9MB/s), 24.7MiB/s-26.9MiB/s (25.9MB/s-28.2MB/s), io=785MiB (823MB), run=10045-10048msec 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.029 00:36:08.029 real 0m11.051s 00:36:08.029 user 0m28.294s 00:36:08.029 sys 0m2.826s 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:08.029 16:57:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.029 ************************************ 00:36:08.029 END TEST fio_dif_digest 00:36:08.029 ************************************ 00:36:08.029 16:57:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:08.029 16:57:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:08.029 rmmod nvme_tcp 00:36:08.029 rmmod nvme_fabrics 
00:36:08.029 rmmod nvme_keyring 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1951011 ']' 00:36:08.029 16:57:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1951011 00:36:08.029 16:57:13 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1951011 ']' 00:36:08.029 16:57:13 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1951011 00:36:08.029 16:57:13 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:08.029 16:57:13 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:08.029 16:57:13 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1951011 00:36:08.030 16:57:13 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:08.030 16:57:13 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:08.030 16:57:13 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1951011' 00:36:08.030 killing process with pid 1951011 00:36:08.030 16:57:13 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1951011 00:36:08.030 [2024-05-15 16:57:13.901437] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:08.030 16:57:13 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1951011 00:36:08.030 16:57:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:08.030 16:57:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:08.287 Waiting for block devices as requested 00:36:08.287 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:08.287 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:08.288 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:08.546 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:08.546 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:08.546 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:08.546 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.804 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.804 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:08.804 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:09.062 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:09.062 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:09.062 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:09.062 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:09.320 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:09.320 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:09.320 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:09.580 16:57:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:09.580 16:57:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:09.580 16:57:16 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:09.580 16:57:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:09.580 16:57:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.580 16:57:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:09.580 16:57:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.482 16:57:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:11.482 
00:36:11.482 real 1m7.551s 00:36:11.482 user 6m30.991s 00:36:11.482 sys 0m19.870s 00:36:11.482 16:57:18 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:11.482 16:57:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:11.482 ************************************ 00:36:11.482 END TEST nvmf_dif 00:36:11.482 ************************************ 00:36:11.482 16:57:18 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:11.482 16:57:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:11.482 16:57:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:11.482 16:57:18 -- common/autotest_common.sh@10 -- # set +x 00:36:11.482 ************************************ 00:36:11.482 START TEST nvmf_abort_qd_sizes 00:36:11.482 ************************************ 00:36:11.482 16:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:11.740 * Looking for test storage... 00:36:11.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.740 16:57:18 nvmf_abort_qd_sizes 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:11.740 16:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:14.281 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:14.281 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:14.281 Found net devices under 0000:09:00.0: cvl_0_0 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:14.281 Found net devices under 0000:09:00.1: cvl_0_1 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.281 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:14.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:36:14.282 00:36:14.282 --- 10.0.0.2 ping statistics --- 00:36:14.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.282 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:36:14.282 00:36:14.282 --- 10.0.0.1 ping statistics --- 00:36:14.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.282 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:14.282 16:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:15.655 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:15.655 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:15.655 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:15.655 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:15.655 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:15.655 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:15.655 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:15.656 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:15.656 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:16.592 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:16.850 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.850 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:16.850 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:16.850 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.850 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:16.850 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1963150 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1963150 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1963150 ']' 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:16.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:16.851 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:16.851 [2024-05-15 16:57:23.899059] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:36:16.851 [2024-05-15 16:57:23.899140] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.851 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.851 [2024-05-15 16:57:23.980042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:16.851 [2024-05-15 16:57:24.069687] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.851 [2024-05-15 16:57:24.069757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.851 [2024-05-15 16:57:24.069786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.851 [2024-05-15 16:57:24.069801] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.851 [2024-05-15 16:57:24.069813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.851 [2024-05-15 16:57:24.073241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.851 [2024-05-15 16:57:24.073288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:16.851 [2024-05-15 16:57:24.073369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:16.851 [2024-05-15 16:57:24.073372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:17.110 16:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.110 ************************************ 00:36:17.110 START TEST spdk_target_abort 00:36:17.110 ************************************ 00:36:17.110 16:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:17.110 16:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:17.110 16:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:36:17.110 16:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.110 16:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.446 spdk_targetn1 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.446 [2024-05-15 16:57:27.087553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.446 [2024-05-15 16:57:27.119549] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:20.446 [2024-05-15 16:57:27.119851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:20.446 16:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:20.446 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.717 Initializing NVMe Controllers 00:36:23.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:23.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:23.717 Initialization complete. Launching workers. 00:36:23.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10996, failed: 0 00:36:23.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1202, failed to submit 9794 00:36:23.717 success 790, unsuccess 412, failed 0 00:36:23.717 16:57:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:23.717 16:57:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.717 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.994 Initializing NVMe Controllers 00:36:26.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:26.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:26.994 Initialization complete. Launching workers. 00:36:26.994 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8617, failed: 0 00:36:26.994 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7401 00:36:26.994 success 328, unsuccess 888, failed 0 00:36:26.994 16:57:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:26.994 16:57:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:26.994 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.518 Initializing NVMe Controllers 00:36:29.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:29.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:29.518 Initialization complete. Launching workers. 
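[Annotation] Here rabort drives build/examples/abort once per queue depth, after assembling the transport ID string field by field (trtype through subnqn) in the loop traced above. Condensed into a stand-alone sketch with the exact arguments from the log, where -q is the queue depth under test, -w rw -M 50 a 50/50 read/write mix, and -o 4096 the I/O size:

# One abort-example pass per queue depth, against the TCP subsystem the
# test just created; qds=(4 24 64) matches abort_qd_sizes.sh@26 above.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done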
00:36:29.518 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31510, failed: 0 00:36:29.518 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2701, failed to submit 28809 00:36:29.518 success 532, unsuccess 2169, failed 0 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.518 16:57:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1963150 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1963150 ']' 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1963150 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1963150 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1963150' 00:36:30.887 killing process with pid 1963150 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1963150 00:36:30.887 [2024-05-15 16:57:38.055262] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:30.887 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1963150 00:36:31.144 00:36:31.144 real 0m14.053s 00:36:31.144 user 0m53.058s 00:36:31.144 sys 0m2.637s 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.144 ************************************ 00:36:31.144 END TEST spdk_target_abort 00:36:31.144 ************************************ 00:36:31.144 16:57:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:31.144 16:57:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:31.144 16:57:38 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:36:31.144 16:57:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:31.144 ************************************ 00:36:31.144 START TEST kernel_target_abort 00:36:31.144 ************************************ 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:31.144 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:31.145 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:31.145 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:31.145 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:31.145 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:31.145 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:31.401 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:31.401 16:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:32.776 Waiting for block devices as requested 00:36:32.776 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:32.776 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:32.776 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:32.776 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:32.776 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:32.776 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:33.034 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:33.034 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:33.034 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:33.034 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:33.292 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:33.292 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:33.292 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:33.549 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:33.549 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:33.549 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:33.549 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:33.806 No valid GPT data, bailing 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:33.806 16:57:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:36:33.806 00:36:33.806 Discovery Log Number of Records 2, Generation counter 2 00:36:33.806 =====Discovery Log Entry 0====== 00:36:33.806 trtype: tcp 00:36:33.806 adrfam: ipv4 00:36:33.806 subtype: current discovery subsystem 00:36:33.806 treq: not specified, sq flow control disable supported 00:36:33.806 portid: 1 00:36:33.806 trsvcid: 4420 00:36:33.806 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:33.806 traddr: 10.0.0.1 00:36:33.806 eflags: none 00:36:33.806 sectype: none 00:36:33.806 =====Discovery Log Entry 1====== 00:36:33.806 trtype: tcp 00:36:33.806 adrfam: ipv4 00:36:33.806 subtype: nvme subsystem 00:36:33.806 treq: not specified, sq flow control disable supported 00:36:33.806 portid: 1 00:36:33.806 trsvcid: 4420 00:36:33.806 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:33.806 traddr: 10.0.0.1 00:36:33.806 eflags: none 00:36:33.806 sectype: none 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.806 16:57:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:33.806 16:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.806 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.079 Initializing NVMe Controllers 00:36:37.079 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.079 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.079 Initialization complete. Launching workers. 00:36:37.079 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34874, failed: 0 00:36:37.079 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34874, failed to submit 0 00:36:37.079 success 0, unsuccess 34874, failed 0 00:36:37.079 16:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.079 16:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.079 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.358 Initializing NVMe Controllers 00:36:40.358 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.358 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.358 Initialization complete. Launching workers. 
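[Annotation] The kernel target these passes exercise was assembled a few lines up (nvmf/common.sh@658-677) purely through nvmet configfs writes. A condensed sketch of that sequence; the directory layout is the kernel nvmet ABI, the echoed values are taken from the trace, and the attr_serial/attr_allow_any_host attribute names are inferred from nvmf/common.sh rather than visible in the xtrace output:

# Build a kernel NVMe-oF TCP target exposing /dev/nvme0n1 on 10.0.0.1:4420.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"  # attribute name inferred
echo 1 > "$subsys/attr_allow_any_host"                         # no host allow-list
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port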
00:36:40.358 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68593, failed: 0 00:36:40.358 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17302, failed to submit 51291 00:36:40.358 success 0, unsuccess 17302, failed 0 00:36:40.358 16:57:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.358 16:57:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.358 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.928 Initializing NVMe Controllers 00:36:42.928 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:42.928 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:42.928 Initialization complete. Launching workers. 00:36:42.928 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74614, failed: 0 00:36:42.928 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18606, failed to submit 56008 00:36:42.928 success 0, unsuccess 18606, failed 0 00:36:42.928 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:42.928 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:42.929 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:43.186 16:57:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:44.561 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:44.561 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:44.561 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:44.561 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:45.496 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:45.496 00:36:45.496 real 0m14.191s 00:36:45.496 user 0m5.486s 00:36:45.496 sys 0m3.451s 00:36:45.496 16:57:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:45.496 16:57:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.496 ************************************ 00:36:45.496 END TEST kernel_target_abort 00:36:45.496 ************************************ 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:45.496 rmmod nvme_tcp 00:36:45.496 rmmod nvme_fabrics 00:36:45.496 rmmod nvme_keyring 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1963150 ']' 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1963150 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1963150 ']' 00:36:45.496 16:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1963150 00:36:45.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1963150) - No such process 00:36:45.497 16:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1963150 is not found' 00:36:45.497 Process with pid 1963150 is not found 00:36:45.497 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:45.497 16:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:46.870 Waiting for block devices as requested 00:36:46.870 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:46.870 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:47.127 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:47.127 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:47.127 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:47.127 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:47.384 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:47.384 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:47.384 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:47.384 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:47.642 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:47.642 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:47.642 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:47.642 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:47.899 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:47.899 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:47.899 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:48.158 16:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.083 16:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:50.084 00:36:50.084 real 0m38.551s 00:36:50.084 user 1m1.074s 00:36:50.084 sys 0m10.043s 00:36:50.084 16:57:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:50.084 16:57:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:50.084 ************************************ 00:36:50.084 END TEST nvmf_abort_qd_sizes 00:36:50.084 ************************************ 00:36:50.084 16:57:57 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:50.084 16:57:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:50.084 16:57:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:50.084 16:57:57 -- common/autotest_common.sh@10 -- # set +x 00:36:50.084 ************************************ 00:36:50.084 START TEST keyring_file 00:36:50.084 ************************************ 00:36:50.084 16:57:57 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:50.341 * Looking for test storage... 
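[Annotation] file.sh starts here under run_test, the wrapper responsible for every starred START TEST/END TEST banner in this log; it sanity-checks its argument count (the '[' 2 -le 1 ']' test above) and times the suite. A minimal sketch of the banner-and-dispatch behaviour, with the timing and xtrace plumbing of autotest_common.sh omitted:

# Run a named test suite and bracket its output with START/END banners.
run_test() {
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}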
00:36:50.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:50.341 16:57:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:50.341 16:57:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.341 16:57:57 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:50.341 16:57:57 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.341 16:57:57 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.342 16:57:57 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.342 16:57:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.342 16:57:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.342 16:57:57 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.342 16:57:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:50.342 16:57:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5Vl7dwSx0e 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:50.342 16:57:57 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5Vl7dwSx0e 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5Vl7dwSx0e 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5Vl7dwSx0e 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Zvmq05ljnT 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:50.342 16:57:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zvmq05ljnT 00:36:50.342 16:57:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Zvmq05ljnT 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Zvmq05ljnT 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1969188 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:50.342 16:57:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1969188 00:36:50.342 16:57:57 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1969188 ']' 00:36:50.342 16:57:57 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.342 16:57:57 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:50.342 16:57:57 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.342 16:57:57 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:50.342 16:57:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.342 [2024-05-15 16:57:57.455263] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
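[Annotation] prep_key above writes each PSK to a mktemp file (chmod 0600) in the NVMe TLS PSK interchange format, which format_interchange_psk produces through the inline python call. A minimal sketch of that wrapping, assuming digest 0 (no hash) and a little-endian CRC-32 appended to the configured key:

# Wrap a hex PSK as NVMeTLSkey-1:<digest>:<base64(key || crc32)>:
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib

psk = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(psk).to_bytes(4, "little")  # CRC-32 of the configured PSK
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(psk + crc).decode())
EOF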
00:36:50.342 [2024-05-15 16:57:57.455361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969188 ] 00:36:50.342 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.342 [2024-05-15 16:57:57.524715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.600 [2024-05-15 16:57:57.606871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:50.859 16:57:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.859 [2024-05-15 16:57:57.876307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.859 null0 00:36:50.859 [2024-05-15 16:57:57.908291] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:50.859 [2024-05-15 16:57:57.908370] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:50.859 [2024-05-15 16:57:57.908737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:50.859 [2024-05-15 16:57:57.916341] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.859 16:57:57 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.859 [2024-05-15 16:57:57.924342] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:50.859 request: 00:36:50.859 { 00:36:50.859 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.859 "secure_channel": false, 00:36:50.859 "listen_address": { 00:36:50.859 "trtype": "tcp", 00:36:50.859 "traddr": "127.0.0.1", 00:36:50.859 "trsvcid": "4420" 00:36:50.859 }, 00:36:50.859 "method": "nvmf_subsystem_add_listener", 00:36:50.859 "req_id": 1 00:36:50.859 } 00:36:50.859 Got JSON-RPC error response 00:36:50.859 response: 00:36:50.859 { 00:36:50.859 "code": -32602, 00:36:50.859 
"message": "Invalid parameters" 00:36:50.859 } 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:50.859 16:57:57 keyring_file -- keyring/file.sh@46 -- # bperfpid=1969200 00:36:50.859 16:57:57 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:50.859 16:57:57 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1969200 /var/tmp/bperf.sock 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1969200 ']' 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:50.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:50.859 16:57:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.859 [2024-05-15 16:57:57.970273] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 00:36:50.859 [2024-05-15 16:57:57.970353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969200 ] 00:36:50.859 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.859 [2024-05-15 16:57:58.042597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.117 [2024-05-15 16:57:58.132678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.117 16:57:58 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.117 16:57:58 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:51.117 16:57:58 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:51.117 16:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:51.374 16:57:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Zvmq05ljnT 00:36:51.374 16:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Zvmq05ljnT 00:36:51.632 16:57:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:51.632 16:57:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:51.632 16:57:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.632 16:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.632 16:57:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:36:51.889 16:57:58 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.5Vl7dwSx0e == \/\t\m\p\/\t\m\p\.\5\V\l\7\d\w\S\x\0\e ]] 00:36:51.889 16:57:58 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:51.889 16:57:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:51.889 16:57:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.889 16:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.889 16:57:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.147 16:57:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Zvmq05ljnT == \/\t\m\p\/\t\m\p\.\Z\v\m\q\0\5\l\j\n\T ]] 00:36:52.147 16:57:59 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:52.147 16:57:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.147 16:57:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.147 16:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.147 16:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.147 16:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.405 16:57:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:52.405 16:57:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:52.405 16:57:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:52.405 16:57:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.405 16:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.405 16:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.405 16:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.662 16:57:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:52.662 16:57:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.662 16:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.920 [2024-05-15 16:57:59.959281] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:52.920 nvme0n1 00:36:52.920 16:58:00 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:52.920 16:58:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.920 16:58:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.920 16:58:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.920 16:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.920 16:58:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.178 16:58:00 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:53.178 16:58:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:53.178 16:58:00 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.178 16:58:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.178 16:58:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.178 16:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.178 16:58:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.435 16:58:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:53.435 16:58:00 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:53.693 Running I/O for 1 seconds... 00:36:54.627 00:36:54.627 Latency(us) 00:36:54.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.627 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:54.627 nvme0n1 : 1.02 5486.71 21.43 0.00 0.00 23072.51 8543.95 33204.91 00:36:54.627 =================================================================================================================== 00:36:54.627 Total : 5486.71 21.43 0.00 0.00 23072.51 8543.95 33204.91 00:36:54.627 0 00:36:54.627 16:58:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:54.627 16:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:54.885 16:58:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:54.885 16:58:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:54.885 16:58:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.885 16:58:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.885 16:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.885 16:58:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.142 16:58:02 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:55.142 16:58:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:55.142 16:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:55.142 16:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.142 16:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.142 16:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:55.142 16:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.400 16:58:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:55.400 16:58:02 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@636 -- # 
local arg=bperf_cmd 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.400 16:58:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:55.400 16:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:55.657 [2024-05-15 16:58:02.671886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:55.657 [2024-05-15 16:58:02.672437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde91d0 (107): Transport endpoint is not connected 00:36:55.657 [2024-05-15 16:58:02.673428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde91d0 (9): Bad file descriptor 00:36:55.657 [2024-05-15 16:58:02.674426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:55.657 [2024-05-15 16:58:02.674450] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:55.657 [2024-05-15 16:58:02.674463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
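The attach attempt above fails because key1 does not match the PSK the target subsystem was set up with, so the TCP connection is torn down during controller init and the RPC returns the JSON-RPC error echoed below. A minimal sketch of this negative-path check, assuming a bdevperf instance already listening on /var/tmp/bperf.sock ($SPDK_DIR is an illustrative stand-in for the checkout path):

# Attaching with a PSK that does not match the target is expected to fail;
# the test treats success here as an error (the NOT wrapper above).
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key1; then
    echo "attach with mismatched PSK unexpectedly succeeded" >&2
    exit 1
fi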
00:36:55.657 request: 00:36:55.657 { 00:36:55.657 "name": "nvme0", 00:36:55.657 "trtype": "tcp", 00:36:55.657 "traddr": "127.0.0.1", 00:36:55.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.657 "adrfam": "ipv4", 00:36:55.657 "trsvcid": "4420", 00:36:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.657 "psk": "key1", 00:36:55.657 "method": "bdev_nvme_attach_controller", 00:36:55.657 "req_id": 1 00:36:55.657 } 00:36:55.657 Got JSON-RPC error response 00:36:55.657 response: 00:36:55.657 { 00:36:55.657 "code": -32602, 00:36:55.657 "message": "Invalid parameters" 00:36:55.657 } 00:36:55.657 16:58:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:55.657 16:58:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.657 16:58:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.657 16:58:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.657 16:58:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:55.657 16:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.657 16:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.657 16:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.657 16:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.657 16:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.913 16:58:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:55.913 16:58:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:55.913 16:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:55.913 16:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.913 16:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.913 16:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.913 16:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:56.170 16:58:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:56.170 16:58:03 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:56.171 16:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:56.428 16:58:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:56.428 16:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:56.732 16:58:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:56.732 16:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.732 16:58:03 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:56.732 16:58:03 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:56.732 16:58:03 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.5Vl7dwSx0e 00:36:56.732 16:58:03 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:56.732 16:58:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:56.732 16:58:03 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:56.732 16:58:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:56.732 16:58:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.732 16:58:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:56.732 16:58:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.732 16:58:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:56.732 16:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:56.990 [2024-05-15 16:58:04.152373] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5Vl7dwSx0e': 0100660 00:36:56.990 [2024-05-15 16:58:04.152409] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:56.990 request: 00:36:56.990 { 00:36:56.990 "name": "key0", 00:36:56.990 "path": "/tmp/tmp.5Vl7dwSx0e", 00:36:56.990 "method": "keyring_file_add_key", 00:36:56.990 "req_id": 1 00:36:56.990 } 00:36:56.990 Got JSON-RPC error response 00:36:56.990 response: 00:36:56.990 { 00:36:56.990 "code": -1, 00:36:56.990 "message": "Operation not permitted" 00:36:56.990 } 00:36:56.990 16:58:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:56.990 16:58:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:56.990 16:58:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:56.990 16:58:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:56.990 16:58:04 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.5Vl7dwSx0e 00:36:56.990 16:58:04 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:56.990 16:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5Vl7dwSx0e 00:36:57.247 16:58:04 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.5Vl7dwSx0e 00:36:57.247 16:58:04 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:57.247 16:58:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:57.248 16:58:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:57.248 16:58:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.248 16:58:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.248 16:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.505 16:58:04 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:57.505 16:58:04 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:57.505 16:58:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:57.505 16:58:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:57.505 16:58:04 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:57.505 16:58:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.505 16:58:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:57.505 16:58:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.505 16:58:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:57.505 16:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:57.763 [2024-05-15 16:58:04.890384] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5Vl7dwSx0e': No such file or directory 00:36:57.763 [2024-05-15 16:58:04.890413] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:57.763 [2024-05-15 16:58:04.890455] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:57.763 [2024-05-15 16:58:04.890467] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:57.763 [2024-05-15 16:58:04.890479] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:57.763 request: 00:36:57.763 { 00:36:57.763 "name": "nvme0", 00:36:57.763 "trtype": "tcp", 00:36:57.763 "traddr": "127.0.0.1", 00:36:57.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:57.763 "adrfam": "ipv4", 00:36:57.763 "trsvcid": "4420", 00:36:57.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.763 "psk": "key0", 00:36:57.763 "method": "bdev_nvme_attach_controller", 00:36:57.763 "req_id": 1 00:36:57.763 } 00:36:57.763 Got JSON-RPC error response 00:36:57.763 response: 00:36:57.763 { 00:36:57.763 "code": -19, 00:36:57.763 "message": "No such device" 00:36:57.763 } 00:36:57.763 16:58:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:57.763 16:58:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:57.763 16:58:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:57.763 16:58:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:57.763 16:58:04 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:57.763 16:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:58.021 16:58:05 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dHd6hGIRGU 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:58.021 16:58:05 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:58.021 16:58:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:58.021 16:58:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:58.021 16:58:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:58.021 16:58:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:58.021 16:58:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dHd6hGIRGU 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dHd6hGIRGU 00:36:58.021 16:58:05 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dHd6hGIRGU 00:36:58.021 16:58:05 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dHd6hGIRGU 00:36:58.021 16:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dHd6hGIRGU 00:36:58.278 16:58:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.278 16:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.536 nvme0n1 00:36:58.536 16:58:05 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:58.536 16:58:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:58.536 16:58:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:58.536 16:58:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.536 16:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.536 16:58:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:58.794 16:58:05 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:58.794 16:58:05 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:58.794 16:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:59.052 16:58:06 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:59.052 16:58:06 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:59.052 16:58:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.052 16:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.052 16:58:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.310 16:58:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:59.310 16:58:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:59.310 16:58:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:59.310 16:58:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.310 16:58:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.310 16:58:06 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.310 16:58:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.567 16:58:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:59.567 16:58:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:59.567 16:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:59.825 16:58:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:59.825 16:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.825 16:58:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:00.083 16:58:07 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:00.083 16:58:07 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dHd6hGIRGU 00:37:00.083 16:58:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dHd6hGIRGU 00:37:00.341 16:58:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Zvmq05ljnT 00:37:00.341 16:58:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Zvmq05ljnT 00:37:00.599 16:58:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.599 16:58:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.857 nvme0n1 00:37:00.857 16:58:08 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:00.857 16:58:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:01.116 16:58:08 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:01.116 "subsystems": [ 00:37:01.116 { 00:37:01.116 "subsystem": "keyring", 00:37:01.116 "config": [ 00:37:01.116 { 00:37:01.116 "method": "keyring_file_add_key", 00:37:01.116 "params": { 00:37:01.116 "name": "key0", 00:37:01.116 "path": "/tmp/tmp.dHd6hGIRGU" 00:37:01.116 } 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "method": "keyring_file_add_key", 00:37:01.116 "params": { 00:37:01.116 "name": "key1", 00:37:01.116 "path": "/tmp/tmp.Zvmq05ljnT" 00:37:01.116 } 00:37:01.116 } 00:37:01.116 ] 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "subsystem": "iobuf", 00:37:01.116 "config": [ 00:37:01.116 { 00:37:01.116 "method": "iobuf_set_options", 00:37:01.116 "params": { 00:37:01.116 "small_pool_count": 8192, 00:37:01.116 "large_pool_count": 1024, 00:37:01.116 "small_bufsize": 8192, 00:37:01.116 "large_bufsize": 135168 00:37:01.116 } 00:37:01.116 } 00:37:01.116 ] 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "subsystem": "sock", 00:37:01.116 "config": [ 00:37:01.116 { 00:37:01.116 "method": "sock_impl_set_options", 00:37:01.116 "params": { 00:37:01.116 
"impl_name": "posix", 00:37:01.116 "recv_buf_size": 2097152, 00:37:01.116 "send_buf_size": 2097152, 00:37:01.116 "enable_recv_pipe": true, 00:37:01.116 "enable_quickack": false, 00:37:01.116 "enable_placement_id": 0, 00:37:01.116 "enable_zerocopy_send_server": true, 00:37:01.116 "enable_zerocopy_send_client": false, 00:37:01.116 "zerocopy_threshold": 0, 00:37:01.116 "tls_version": 0, 00:37:01.116 "enable_ktls": false 00:37:01.116 } 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "method": "sock_impl_set_options", 00:37:01.116 "params": { 00:37:01.116 "impl_name": "ssl", 00:37:01.116 "recv_buf_size": 4096, 00:37:01.116 "send_buf_size": 4096, 00:37:01.116 "enable_recv_pipe": true, 00:37:01.116 "enable_quickack": false, 00:37:01.116 "enable_placement_id": 0, 00:37:01.116 "enable_zerocopy_send_server": true, 00:37:01.116 "enable_zerocopy_send_client": false, 00:37:01.116 "zerocopy_threshold": 0, 00:37:01.116 "tls_version": 0, 00:37:01.116 "enable_ktls": false 00:37:01.116 } 00:37:01.116 } 00:37:01.116 ] 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "subsystem": "vmd", 00:37:01.116 "config": [] 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "subsystem": "accel", 00:37:01.116 "config": [ 00:37:01.116 { 00:37:01.116 "method": "accel_set_options", 00:37:01.116 "params": { 00:37:01.116 "small_cache_size": 128, 00:37:01.116 "large_cache_size": 16, 00:37:01.116 "task_count": 2048, 00:37:01.116 "sequence_count": 2048, 00:37:01.116 "buf_count": 2048 00:37:01.116 } 00:37:01.116 } 00:37:01.116 ] 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "subsystem": "bdev", 00:37:01.116 "config": [ 00:37:01.116 { 00:37:01.116 "method": "bdev_set_options", 00:37:01.116 "params": { 00:37:01.116 "bdev_io_pool_size": 65535, 00:37:01.116 "bdev_io_cache_size": 256, 00:37:01.116 "bdev_auto_examine": true, 00:37:01.116 "iobuf_small_cache_size": 128, 00:37:01.116 "iobuf_large_cache_size": 16 00:37:01.116 } 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "method": "bdev_raid_set_options", 00:37:01.116 "params": { 00:37:01.116 "process_window_size_kb": 1024 00:37:01.116 } 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "method": "bdev_iscsi_set_options", 00:37:01.116 "params": { 00:37:01.116 "timeout_sec": 30 00:37:01.116 } 00:37:01.116 }, 00:37:01.116 { 00:37:01.116 "method": "bdev_nvme_set_options", 00:37:01.116 "params": { 00:37:01.116 "action_on_timeout": "none", 00:37:01.116 "timeout_us": 0, 00:37:01.116 "timeout_admin_us": 0, 00:37:01.116 "keep_alive_timeout_ms": 10000, 00:37:01.116 "arbitration_burst": 0, 00:37:01.116 "low_priority_weight": 0, 00:37:01.116 "medium_priority_weight": 0, 00:37:01.116 "high_priority_weight": 0, 00:37:01.116 "nvme_adminq_poll_period_us": 10000, 00:37:01.116 "nvme_ioq_poll_period_us": 0, 00:37:01.116 "io_queue_requests": 512, 00:37:01.116 "delay_cmd_submit": true, 00:37:01.116 "transport_retry_count": 4, 00:37:01.116 "bdev_retry_count": 3, 00:37:01.116 "transport_ack_timeout": 0, 00:37:01.116 "ctrlr_loss_timeout_sec": 0, 00:37:01.116 "reconnect_delay_sec": 0, 00:37:01.117 "fast_io_fail_timeout_sec": 0, 00:37:01.117 "disable_auto_failback": false, 00:37:01.117 "generate_uuids": false, 00:37:01.117 "transport_tos": 0, 00:37:01.117 "nvme_error_stat": false, 00:37:01.117 "rdma_srq_size": 0, 00:37:01.117 "io_path_stat": false, 00:37:01.117 "allow_accel_sequence": false, 00:37:01.117 "rdma_max_cq_size": 0, 00:37:01.117 "rdma_cm_event_timeout_ms": 0, 00:37:01.117 "dhchap_digests": [ 00:37:01.117 "sha256", 00:37:01.117 "sha384", 00:37:01.117 "sha512" 00:37:01.117 ], 00:37:01.117 "dhchap_dhgroups": [ 00:37:01.117 "null", 
00:37:01.117 "ffdhe2048", 00:37:01.117 "ffdhe3072", 00:37:01.117 "ffdhe4096", 00:37:01.117 "ffdhe6144", 00:37:01.117 "ffdhe8192" 00:37:01.117 ] 00:37:01.117 } 00:37:01.117 }, 00:37:01.117 { 00:37:01.117 "method": "bdev_nvme_attach_controller", 00:37:01.117 "params": { 00:37:01.117 "name": "nvme0", 00:37:01.117 "trtype": "TCP", 00:37:01.117 "adrfam": "IPv4", 00:37:01.117 "traddr": "127.0.0.1", 00:37:01.117 "trsvcid": "4420", 00:37:01.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:01.117 "prchk_reftag": false, 00:37:01.117 "prchk_guard": false, 00:37:01.117 "ctrlr_loss_timeout_sec": 0, 00:37:01.117 "reconnect_delay_sec": 0, 00:37:01.117 "fast_io_fail_timeout_sec": 0, 00:37:01.117 "psk": "key0", 00:37:01.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:01.117 "hdgst": false, 00:37:01.117 "ddgst": false 00:37:01.117 } 00:37:01.117 }, 00:37:01.117 { 00:37:01.117 "method": "bdev_nvme_set_hotplug", 00:37:01.117 "params": { 00:37:01.117 "period_us": 100000, 00:37:01.117 "enable": false 00:37:01.117 } 00:37:01.117 }, 00:37:01.117 { 00:37:01.117 "method": "bdev_wait_for_examine" 00:37:01.117 } 00:37:01.117 ] 00:37:01.117 }, 00:37:01.117 { 00:37:01.117 "subsystem": "nbd", 00:37:01.117 "config": [] 00:37:01.117 } 00:37:01.117 ] 00:37:01.117 }' 00:37:01.117 16:58:08 keyring_file -- keyring/file.sh@114 -- # killprocess 1969200 00:37:01.117 16:58:08 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1969200 ']' 00:37:01.117 16:58:08 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1969200 00:37:01.117 16:58:08 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:01.117 16:58:08 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:01.117 16:58:08 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1969200 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1969200' 00:37:01.376 killing process with pid 1969200 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@965 -- # kill 1969200 00:37:01.376 Received shutdown signal, test time was about 1.000000 seconds 00:37:01.376 00:37:01.376 Latency(us) 00:37:01.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.376 =================================================================================================================== 00:37:01.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@970 -- # wait 1969200 00:37:01.376 16:58:08 keyring_file -- keyring/file.sh@117 -- # bperfpid=1970654 00:37:01.376 16:58:08 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1970654 /var/tmp/bperf.sock 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1970654 ']' 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:01.376 16:58:08 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:01.376 16:58:08 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:01.376 16:58:08 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:01.376 "subsystems": [ 00:37:01.376 { 00:37:01.376 "subsystem": 
"keyring", 00:37:01.376 "config": [ 00:37:01.376 { 00:37:01.376 "method": "keyring_file_add_key", 00:37:01.376 "params": { 00:37:01.376 "name": "key0", 00:37:01.376 "path": "/tmp/tmp.dHd6hGIRGU" 00:37:01.376 } 00:37:01.376 }, 00:37:01.376 { 00:37:01.376 "method": "keyring_file_add_key", 00:37:01.376 "params": { 00:37:01.376 "name": "key1", 00:37:01.376 "path": "/tmp/tmp.Zvmq05ljnT" 00:37:01.376 } 00:37:01.376 } 00:37:01.376 ] 00:37:01.376 }, 00:37:01.376 { 00:37:01.376 "subsystem": "iobuf", 00:37:01.376 "config": [ 00:37:01.376 { 00:37:01.376 "method": "iobuf_set_options", 00:37:01.376 "params": { 00:37:01.376 "small_pool_count": 8192, 00:37:01.376 "large_pool_count": 1024, 00:37:01.376 "small_bufsize": 8192, 00:37:01.376 "large_bufsize": 135168 00:37:01.376 } 00:37:01.376 } 00:37:01.376 ] 00:37:01.376 }, 00:37:01.376 { 00:37:01.376 "subsystem": "sock", 00:37:01.376 "config": [ 00:37:01.376 { 00:37:01.376 "method": "sock_impl_set_options", 00:37:01.376 "params": { 00:37:01.376 "impl_name": "posix", 00:37:01.376 "recv_buf_size": 2097152, 00:37:01.376 "send_buf_size": 2097152, 00:37:01.376 "enable_recv_pipe": true, 00:37:01.376 "enable_quickack": false, 00:37:01.376 "enable_placement_id": 0, 00:37:01.376 "enable_zerocopy_send_server": true, 00:37:01.376 "enable_zerocopy_send_client": false, 00:37:01.376 "zerocopy_threshold": 0, 00:37:01.376 "tls_version": 0, 00:37:01.376 "enable_ktls": false 00:37:01.376 } 00:37:01.376 }, 00:37:01.376 { 00:37:01.376 "method": "sock_impl_set_options", 00:37:01.376 "params": { 00:37:01.376 "impl_name": "ssl", 00:37:01.376 "recv_buf_size": 4096, 00:37:01.376 "send_buf_size": 4096, 00:37:01.376 "enable_recv_pipe": true, 00:37:01.376 "enable_quickack": false, 00:37:01.376 "enable_placement_id": 0, 00:37:01.376 "enable_zerocopy_send_server": true, 00:37:01.376 "enable_zerocopy_send_client": false, 00:37:01.376 "zerocopy_threshold": 0, 00:37:01.376 "tls_version": 0, 00:37:01.376 "enable_ktls": false 00:37:01.376 } 00:37:01.376 } 00:37:01.376 ] 00:37:01.376 }, 00:37:01.376 { 00:37:01.376 "subsystem": "vmd", 00:37:01.376 "config": [] 00:37:01.376 }, 00:37:01.376 { 00:37:01.376 "subsystem": "accel", 00:37:01.376 "config": [ 00:37:01.376 { 00:37:01.376 "method": "accel_set_options", 00:37:01.376 "params": { 00:37:01.376 "small_cache_size": 128, 00:37:01.376 "large_cache_size": 16, 00:37:01.376 "task_count": 2048, 00:37:01.376 "sequence_count": 2048, 00:37:01.376 "buf_count": 2048 00:37:01.377 } 00:37:01.377 } 00:37:01.377 ] 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "subsystem": "bdev", 00:37:01.377 "config": [ 00:37:01.377 { 00:37:01.377 "method": "bdev_set_options", 00:37:01.377 "params": { 00:37:01.377 "bdev_io_pool_size": 65535, 00:37:01.377 "bdev_io_cache_size": 256, 00:37:01.377 "bdev_auto_examine": true, 00:37:01.377 "iobuf_small_cache_size": 128, 00:37:01.377 "iobuf_large_cache_size": 16 00:37:01.377 } 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "method": "bdev_raid_set_options", 00:37:01.377 "params": { 00:37:01.377 "process_window_size_kb": 1024 00:37:01.377 } 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "method": "bdev_iscsi_set_options", 00:37:01.377 "params": { 00:37:01.377 "timeout_sec": 30 00:37:01.377 } 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "method": "bdev_nvme_set_options", 00:37:01.377 "params": { 00:37:01.377 "action_on_timeout": "none", 00:37:01.377 "timeout_us": 0, 00:37:01.377 "timeout_admin_us": 0, 00:37:01.377 "keep_alive_timeout_ms": 10000, 00:37:01.377 "arbitration_burst": 0, 00:37:01.377 "low_priority_weight": 0, 00:37:01.377 
"medium_priority_weight": 0, 00:37:01.377 "high_priority_weight": 0, 00:37:01.377 "nvme_adminq_poll_period_us": 10000, 00:37:01.377 "nvme_ioq_poll_period_us": 0, 00:37:01.377 "io_queue_requests": 512, 00:37:01.377 "delay_cmd_submit": true, 00:37:01.377 "transport_retry_count": 4, 00:37:01.377 "bdev_retry_count": 3, 00:37:01.377 "transport_ack_timeout": 0, 00:37:01.377 "ctrlr_loss_timeout_sec": 0, 00:37:01.377 "reconnect_delay_sec": 0, 00:37:01.377 "fast_io_fail_timeout_sec": 0, 00:37:01.377 "disable_auto_failback": false, 00:37:01.377 "generate_uuids": false, 00:37:01.377 "transport_tos": 0, 00:37:01.377 "nvme_error_stat": false, 00:37:01.377 "rdma_srq_size": 0, 00:37:01.377 "io_path_stat": false, 00:37:01.377 "allow_accel_sequence": false, 00:37:01.377 "rdma_max_cq_size": 0, 00:37:01.377 "rdma_cm_event_timeout_ms": 0, 00:37:01.377 "dhchap_digests": [ 00:37:01.377 "sha256", 00:37:01.377 16:58:08 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:01.377 "sha384", 00:37:01.377 "sha512" 00:37:01.377 ], 00:37:01.377 "dhchap_dhgroups": [ 00:37:01.377 "null", 00:37:01.377 "ffdhe2048", 00:37:01.377 "ffdhe3072", 00:37:01.377 "ffdhe4096", 00:37:01.377 "ffdhe6144", 00:37:01.377 "ffdhe8192" 00:37:01.377 ] 00:37:01.377 } 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "method": "bdev_nvme_attach_controller", 00:37:01.377 "params": { 00:37:01.377 "name": "nvme0", 00:37:01.377 "trtype": "TCP", 00:37:01.377 "adrfam": "IPv4", 00:37:01.377 "traddr": "127.0.0.1", 00:37:01.377 "trsvcid": "4420", 00:37:01.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:01.377 "prchk_reftag": false, 00:37:01.377 "prchk_guard": false, 00:37:01.377 "ctrlr_loss_timeout_sec": 0, 00:37:01.377 "reconnect_delay_sec": 0, 00:37:01.377 "fast_io_fail_timeout_sec": 0, 00:37:01.377 "psk": "key0", 00:37:01.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:01.377 "hdgst": false, 00:37:01.377 "ddgst": false 00:37:01.377 } 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "method": "bdev_nvme_set_hotplug", 00:37:01.377 "params": { 00:37:01.377 "period_us": 100000, 00:37:01.377 "enable": false 00:37:01.377 } 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "method": "bdev_wait_for_examine" 00:37:01.377 } 00:37:01.377 ] 00:37:01.377 }, 00:37:01.377 { 00:37:01.377 "subsystem": "nbd", 00:37:01.377 "config": [] 00:37:01.377 } 00:37:01.377 ] 00:37:01.377 }' 00:37:01.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:01.377 16:58:08 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:01.377 16:58:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:01.636 [2024-05-15 16:58:08.609049] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 22.11.4 initialization... 
00:37:01.636 [2024-05-15 16:58:08.609126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970654 ] 00:37:01.636 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.636 [2024-05-15 16:58:08.678373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.636 [2024-05-15 16:58:08.763795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.894 [2024-05-15 16:58:08.946797] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:02.460 16:58:09 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:02.460 16:58:09 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:02.460 16:58:09 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:02.460 16:58:09 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:02.460 16:58:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.718 16:58:09 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:02.718 16:58:09 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:02.718 16:58:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:02.718 16:58:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.718 16:58:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.718 16:58:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.718 16:58:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.976 16:58:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:02.976 16:58:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:02.976 16:58:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:02.976 16:58:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.976 16:58:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.976 16:58:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.976 16:58:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:03.235 16:58:10 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:03.235 16:58:10 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:03.235 16:58:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:03.235 16:58:10 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:03.493 16:58:10 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:03.493 16:58:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:03.493 16:58:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dHd6hGIRGU /tmp/tmp.Zvmq05ljnT 00:37:03.493 16:58:10 keyring_file -- keyring/file.sh@20 -- # killprocess 1970654 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1970654 ']' 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1970654 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1970654 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1970654' 00:37:03.493 killing process with pid 1970654 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@965 -- # kill 1970654 00:37:03.493 Received shutdown signal, test time was about 1.000000 seconds 00:37:03.493 00:37:03.493 Latency(us) 00:37:03.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.493 =================================================================================================================== 00:37:03.493 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:03.493 16:58:10 keyring_file -- common/autotest_common.sh@970 -- # wait 1970654 00:37:03.751 16:58:10 keyring_file -- keyring/file.sh@21 -- # killprocess 1969188 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1969188 ']' 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1969188 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1969188 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1969188' 00:37:03.751 killing process with pid 1969188 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@965 -- # kill 1969188 00:37:03.751 [2024-05-15 16:58:10.844467] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:03.751 [2024-05-15 16:58:10.844544] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:03.751 16:58:10 keyring_file -- common/autotest_common.sh@970 -- # wait 1969188 00:37:04.317 00:37:04.317 real 0m13.988s 00:37:04.317 user 0m34.633s 00:37:04.317 sys 0m3.275s 00:37:04.317 16:58:11 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:04.317 16:58:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:04.317 ************************************ 00:37:04.317 END TEST keyring_file 00:37:04.317 ************************************ 00:37:04.317 16:58:11 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:37:04.317 16:58:11 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:04.317 
16:58:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:04.317 16:58:11 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:37:04.317 16:58:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:04.317 16:58:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:04.317 16:58:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:04.317 16:58:11 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:37:04.317 16:58:11 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:37:04.317 16:58:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:04.317 16:58:11 -- common/autotest_common.sh@10 -- # set +x 00:37:04.317 16:58:11 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:37:04.317 16:58:11 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:04.317 16:58:11 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:04.317 16:58:11 -- common/autotest_common.sh@10 -- # set +x 00:37:05.691 INFO: APP EXITING 00:37:05.691 INFO: killing all VMs 00:37:05.691 INFO: killing vhost app 00:37:05.691 INFO: EXIT DONE 00:37:07.064 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:07.064 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:07.064 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:07.064 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:07.064 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:07.064 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:07.064 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:07.064 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:07.064 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:37:07.064 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:07.064 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:07.064 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:07.064 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:07.064 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:07.064 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:07.064 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:07.064 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:08.444 Cleaning 00:37:08.444 Removing: /var/run/dpdk/spdk0/config 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:08.444 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:08.444 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:08.444 Removing: /var/run/dpdk/spdk1/config 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:08.444 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:08.444 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:08.444 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:08.444 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:08.444 Removing: /var/run/dpdk/spdk2/config 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:08.444 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:08.444 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:08.444 Removing: /var/run/dpdk/spdk3/config 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:08.444 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:08.444 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:08.444 Removing: /var/run/dpdk/spdk4/config 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:08.444 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:08.444 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:08.444 Removing: /dev/shm/bdev_svc_trace.1 00:37:08.444 Removing: /dev/shm/nvmf_trace.0 00:37:08.444 Removing: /dev/shm/spdk_tgt_trace.pid1635059 00:37:08.444 Removing: /var/run/dpdk/spdk0 00:37:08.444 Removing: /var/run/dpdk/spdk1 00:37:08.444 Removing: /var/run/dpdk/spdk2 00:37:08.703 Removing: /var/run/dpdk/spdk3 00:37:08.703 Removing: /var/run/dpdk/spdk4 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1633511 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1634244 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1635059 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1635495 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1636177 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1636324 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1637055 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1637073 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1637309 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1638615 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1640160 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1640347 
00:37:08.703 Removing: /var/run/dpdk/spdk_pid1640544 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1640860 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1641048 00:37:08.703 Removing: /var/run/dpdk/spdk_pid1641205 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1641363 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1641543 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1642130 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1644477 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1644648 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1644808 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1644931 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1645241 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1645364 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1645671 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1645695 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1645969 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1645981 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1646148 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1646276 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1646643 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1646797 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1646990 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1647158 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1647309 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1647369 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1647536 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1647805 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1647962 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1648115 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1648396 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1648554 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1648711 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1648931 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1649139 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1649299 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1649450 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1649729 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1649885 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1650044 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1650297 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1650476 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1650635 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1650797 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1651068 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1651226 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1651412 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1651617 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1653979 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1709758 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1712659 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1719904 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1723484 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1726124 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1726535 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1734857 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1734975 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1735635 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1736288 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1736941 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1737309 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1737350 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1737490 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1737623 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1737631 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1738286 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1738867 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1739478 
00:37:08.704 Removing: /var/run/dpdk/spdk_pid1739880 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1739902 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1740141 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1741019 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1741738 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1747382 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1747659 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1750513 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1754559 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1756723 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1763684 00:37:08.704 Removing: /var/run/dpdk/spdk_pid1770195 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1771383 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1772048 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1783224 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1785733 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1809781 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1812968 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1814027 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1815341 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1815478 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1815499 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1815638 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1816070 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1817290 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1818004 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1818421 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1819991 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1820349 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1820902 00:37:08.962 Removing: /var/run/dpdk/spdk_pid1823595 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1827872 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1831408 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1855543 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1858690 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1862733 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1863678 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1864706 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1867614 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1870255 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1875050 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1875052 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1878224 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1878360 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1878496 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1878758 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1878772 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1879855 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1881143 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1882319 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1883494 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1884674 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1885848 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1889802 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1890757 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1891773 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1892361 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1896225 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1898075 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1901781 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1905518 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1912148 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1916876 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1916878 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1930483 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1930975 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1931411 
00:37:08.963 Removing: /var/run/dpdk/spdk_pid1931817 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1932396 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1932805 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1933209 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1933619 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1936411 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1936672 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1940752 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1940919 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1942524 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1947726 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1947848 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1951133 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1952526 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1953933 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1954710 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1956197 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1957169 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1963476 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1963838 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1964226 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1965883 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1966157 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1966554 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1969188 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1969200 00:37:08.963 Removing: /var/run/dpdk/spdk_pid1970654 00:37:08.963 Clean 00:37:09.221 16:58:16 -- common/autotest_common.sh@1447 -- # return 0 00:37:09.221 16:58:16 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:37:09.221 16:58:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.221 16:58:16 -- common/autotest_common.sh@10 -- # set +x 00:37:09.221 16:58:16 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:37:09.221 16:58:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.221 16:58:16 -- common/autotest_common.sh@10 -- # set +x 00:37:09.221 16:58:16 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:09.221 16:58:16 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:09.221 16:58:16 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:09.221 16:58:16 -- spdk/autotest.sh@387 -- # hash lcov 00:37:09.221 16:58:16 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:09.221 16:58:16 -- spdk/autotest.sh@389 -- # hostname 00:37:09.221 16:58:16 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:09.479 geninfo: WARNING: invalid characters removed from testname! 
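With the tests done, autotest folds the coverage captures into a single tracefile: the lcov invocations that follow merge the base and test captures, then strip out-of-tree and system sources. A condensed sketch of that sequence ($LCOV_OPTS abbreviates the --rc/--no-external options shown below, and $OUT stands in for the autotest output directory):

# Merge baseline and post-test captures, then filter non-SPDK code.
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"   # drop DPDK sources
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"     # drop system headers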
00:37:41.560 16:58:43 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:41.818 16:58:49 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:45.092 16:58:51 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:47.616 16:58:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:50.892 16:58:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:53.417 16:59:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:56.693 16:59:03 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:56.693 16:59:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:56.693 16:59:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:56.693 16:59:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:56.693 16:59:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:56.693 16:59:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.693 16:59:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.693 16:59:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.693 16:59:03 -- paths/export.sh@5 -- $ export PATH
00:37:56.693 16:59:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.693 16:59:03 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:56.693 16:59:03 -- common/autobuild_common.sh@437 -- $ date +%s
00:37:56.693 16:59:03 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715785143.XXXXXX
00:37:56.693 16:59:03 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715785143.ELe41V
00:37:56.693 16:59:03 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:37:56.693 16:59:03 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:37:56.693 16:59:03 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:56.693 16:59:03 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:56.693 16:59:03 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:56.693 16:59:03 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:56.693 16:59:03 -- common/autobuild_common.sh@453 -- $ get_config_params
00:37:56.693 16:59:03 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:37:56.693 16:59:03 -- common/autotest_common.sh@10 -- $ set +x
00:37:56.693 16:59:03 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:56.693 16:59:03 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:37:56.693 16:59:03 -- pm/common@17 -- $ local monitor
00:37:56.693 16:59:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:56.693 16:59:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:56.693 16:59:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
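
The start_monitor_resources trace that begins here launches one background collector per entry in MONITOR_RESOURCES and leaves a pid file for each under the output power/ directory; the stop_monitor_resources trap at the end of the run TERMs whatever those pid files point at. A minimal sketch of that start/stop-by-pidfile pattern, with hypothetical collector names and directories:

  # Start each collector in the background and record its pid (names are placeholders).
  for monitor in collect-cpu-load collect-vmstat; do
    "./scripts/perf/pm/$monitor" -d ./power -l &
    echo $! > "./power/$monitor.pid"
  done
  # On exit: signal every collector whose pid file still exists.
  for pidfile in ./power/*.pid; do
    [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
  done

In the real scripts the collectors record their own pids; capturing $! from the launcher, as sketched here, is simply the shortest way to show the same contract.
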
00:37:56.693 16:59:03 -- pm/common@21 -- $ date +%s
00:37:56.693 16:59:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:56.693 16:59:03 -- pm/common@21 -- $ date +%s
00:37:56.693 16:59:03 -- pm/common@25 -- $ sleep 1
00:37:56.693 16:59:03 -- pm/common@21 -- $ date +%s
00:37:56.693 16:59:03 -- pm/common@21 -- $ date +%s
00:37:56.693 16:59:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715785143
00:37:56.693 16:59:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715785143
00:37:56.693 16:59:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715785143
00:37:56.693 16:59:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715785143
00:37:56.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715785143_collect-vmstat.pm.log
00:37:56.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715785143_collect-cpu-load.pm.log
00:37:56.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715785143_collect-cpu-temp.pm.log
00:37:56.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715785143_collect-bmc-pm.bmc.pm.log
00:37:57.260 16:59:04 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:37:57.260 16:59:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:57.260 16:59:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:57.260 16:59:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:57.260 16:59:04 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:57.260 16:59:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:57.260 16:59:04 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:57.260 16:59:04 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:57.260 16:59:04 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:57.260 16:59:04 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:57.260 16:59:04 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:57.260 16:59:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:57.260 16:59:04 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:57.260 16:59:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:57.260 16:59:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.260 16:59:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:57.260 16:59:04 -- pm/common@44 -- $ pid=1981794
00:37:57.260 16:59:04 -- pm/common@50 -- $ kill -TERM 1981794
00:37:57.260 16:59:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.260 16:59:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:57.260 16:59:04 -- pm/common@44 -- $ pid=1981796
00:37:57.260 16:59:04 -- pm/common@50 -- $ kill -TERM 1981796
00:37:57.260 16:59:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.260 16:59:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:57.260 16:59:04 -- pm/common@44 -- $ pid=1981797
00:37:57.260 16:59:04 -- pm/common@50 -- $ kill -TERM 1981797
00:37:57.260 16:59:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:57.260 16:59:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:57.260 16:59:04 -- pm/common@44 -- $ pid=1981832
00:37:57.260 16:59:04 -- pm/common@50 -- $ sudo -E kill -TERM 1981832
00:37:57.518 + [[ -n 1527381 ]]
00:37:57.518 + sudo kill 1527381
00:37:57.528 [Pipeline] }
00:37:57.545 [Pipeline] // stage
00:37:57.550 [Pipeline] }
00:37:57.566 [Pipeline] // timeout
00:37:57.572 [Pipeline] }
00:37:57.586 [Pipeline] // catchError
00:37:57.591 [Pipeline] }
00:37:57.607 [Pipeline] // wrap
00:37:57.613 [Pipeline] }
00:37:57.629 [Pipeline] // catchError
00:37:57.637 [Pipeline] stage
00:37:57.638 [Pipeline] { (Epilogue)
00:37:57.652 [Pipeline] catchError
00:37:57.654 [Pipeline] {
00:37:57.668 [Pipeline] echo
00:37:57.669 Cleanup processes
00:37:57.675 [Pipeline] sh
00:37:57.952 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:57.952 1981969 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:57.952 1982061 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:57.964 [Pipeline] sh
00:37:58.289 ++ grep -v 'sudo pgrep'
00:37:58.289 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:58.289 ++ awk '{print $1}'
00:37:58.289 + sudo kill -9 1981969
00:37:58.300 [Pipeline] sh
00:37:58.578 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:08.560 [Pipeline] sh
00:38:08.840 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:08.840 Artifacts sizes are good
00:38:08.854 [Pipeline] archiveArtifacts
00:38:08.861 Archiving artifacts
00:38:09.062 [Pipeline] sh
00:38:09.342 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:09.354 [Pipeline] cleanWs
00:38:09.362 [WS-CLEANUP] Deleting project workspace...
00:38:09.362 [WS-CLEANUP] Deferred wipeout is used...
00:38:09.367 [WS-CLEANUP] done
00:38:09.369 [Pipeline] }
00:38:09.389 [Pipeline] // catchError
00:38:09.401 [Pipeline] sh
00:38:09.678 + logger -p user.info -t JENKINS-CI
00:38:09.685 [Pipeline] }
00:38:09.700 [Pipeline] // stage
00:38:09.705 [Pipeline] }
00:38:09.723 [Pipeline] // node
00:38:09.728 [Pipeline] End of Pipeline
00:38:09.758 Finished: SUCCESS